ObjectDB Database Search

101-150 of 187 results

ObjectDB shuts down

It might be that CentOS starts to kill processes if it runs out of memory, in order ... /Tomee) was using too much memory (out of the total available), thus CentOS was killing it. zmirc

Java 6/7 Performance Comparison

and then again after adjusting the objectdb.conf to grant more memory/threads to applications but have not noticed any ... with a profiler (e.g. Java VisualVM) and provide profiling info? support Support Attached is the CPU and Memory

Tutorial issue

class previously and the database had a memory of it. By starting with a new database that memory

Create simple index for a Id field

retrieval and reading operations (see this comparison). But obviously a HashMap in memory is faster ... your own HashMap cache in memory for specific objects if you need the extra performance. But note
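The snippet above suggests keeping your own HashMap cache in memory for specific objects when extra read performance is needed. A minimal sketch of that pattern, where the loader function stands in for a database read (the class and method names here are illustrative, not ObjectDB API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A tiny read-through cache in front of a slow lookup (e.g. a database read).
// The loader Function is a stand-in for the real retrieval call.
public class SimpleCache {
    private final Map<Long, String> cache = new HashMap<>();
    private final Function<Long, String> loader;
    private int misses = 0;  // counts how many times the loader actually ran

    public SimpleCache(Function<Long, String> loader) {
        this.loader = loader;
    }

    public String get(long id) {
        // computeIfAbsent calls the loader only on a cache miss
        return cache.computeIfAbsent(id, key -> {
            misses++;
            return loader.apply(key);
        });
    }

    public int getMisses() {
        return misses;
    }

    public static void main(String[] args) {
        SimpleCache cache = new SimpleCache(id -> "entity-" + id);  // fake "database" load
        cache.get(1L);
        cache.get(1L);  // second call is served from the HashMap
        cache.get(2L);
        System.out.println(cache.getMisses());
    }
}
```

As the snippet notes, this only pays off for objects read repeatedly; the cache also holds stale copies if the underlying data changes, so it suits read-mostly data.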

Optimistic lock

I made to the object in memory are retained and I simply need to create a new transaction, persist ... of course) retain their values in memory. Objects can be restored (to the values in the database) by setting

More Efficient Primary Keys

Can you give us a recommendation on how we can represent the Key (UUID) most efficiently? Both in terms of memory usage of the database files, memory usage of the JVM, and efficiency for queries. We already need the key before the entities are persisted so that we can build references. a) Composite

Database(*.odb) file size on disk

memory always used by new entities? - Will the database file be reduced automatically? We have created a small test in which we create and delete entities. It looks like the released memory

Custom Classloader: Issues with the Enhancer Agent

version in memory first. However, the main issue is that the Agent ... the enhancement to the class and loads the modified version in memory first. Not exactly. The enhancer

2.7.3_03 Erroneous "Attempt to modify a primary key of a managed object"

without database schema change, and save some space in memory. You can still use StringIdentity ... id; and that might fix the issue, save memory, and not require database schema change (automatically

ObjectDB JPA 2.2 support ( java.util.stream.Stream )

Hi, in my database I want to fetch the result list in big chunks. JPA 2.2 has a new feature named Stream, so I can fetch all JPA queries with less memory. Is there any news for ObjectDB? Will you update ..., similarly to the way that JPA 2.2 streams work. To free memory, however, you will have to clear
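The question above concerns JPA 2.2's `Query.getResultStream()`, which lets results be processed one element at a time instead of materializing the whole result list. A self-contained sketch of that lazy style using a plain `java.util.stream` pipeline as a stand-in for the query stream:

```java
import java.util.stream.IntStream;

// Stream-style processing: elements flow through the pipeline one at a
// time, so memory use stays flat regardless of the source size.
// In JPA 2.2 the analogous source would be query.getResultStream();
// a plain IntStream is used here so the example runs on its own.
public class StreamChunks {
    static long sumOfEvens(int n) {
        return IntStream.range(0, n)       // lazy source, nothing materialized
                .filter(i -> i % 2 == 0)   // per-element processing
                .asLongStream()
                .sum();                    // terminal operation drives the pipeline
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvens(10));  // 0 + 2 + 4 + 6 + 8 = 20
    }
}
```

As the reply notes, a streamed query keeps loaded entities in the persistence context, so clearing the EntityManager periodically is still needed to actually free memory.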

Slow loading from db

) there are no issues. During the load, the program is using a lot of memory, but still not ... the number of objects that you load into memory at any given time. If possible, please provide a minimal

ObjectDB is opening too many sockets

ObjectDB activity is observed. A heap dump of the server (i.e. a snapshot of the used memory) may reveal ... when there is a lot of free memory. So I think that we would maybe need to run some script periodically

Pooling of EntityManagers

is represented by different memory objects in different EntityManager instances. The possible issue ... , and the new thread that got the ownership on that EntityManager is working on the same memory

Navigation to Collection Elements

/ PersistenceManager is open. If the referenced objects are not already in memory they will be loaded

Vs. EHCache

the fastest solution is to use a memory-based HashMap with a serialization file as a persistence back end. support Support

evictAll() behavior

one transaction and I want to reclaim memory after commit, or even better after flush, but I

Is there any faster select method?

you don't need a SELECT query. By the way, an invocation of persist is very fast because it is a memory

Query performance problem

activity after the first query, so I assume the data are in memory already. lwalkowski Lukasz

Insertion Speed Rate and Batch Load

This test from the Memory Leak thread uses a separate transaction, and moreover, a new EntityManager

Multithreading Test and Deadlock

threads are slower because of higher memory usage (4 x 10,000 x 100 strings x attribute length = many

Is there a way to set EAGER fetch on all "ToMany" collection relationships

as the fetch type of all relationships may cause loading the entire database into memory

Connection is closed (error 526)

Lai Yang Maybe you are trying to load the entire tree into memory. Try replacing EAGER with LAZY

object creation during pmf.getPersistenceManager()

during this invocation is really huge, which causes OutOfMemory during commit (memory for JVM 1.4 GB

find repeated records by JPQL

while trying to load all the entity objects into a memory data structure is expected. You may try

Working with a cleared cache

I'm working on trying to improve the performance of our database processing class and I'm attempting to keep the L1 cache within the PersistenceManager cleared in an effort to reduce its memory footprint. The code snippet below is part of a Callable task that is run repeatedly with a given list

One transaction or two?

to consume more memory. Since you are asking about add/remove operations rather than read operations

using DbUnit with ObjectDb

that can fit into the main memory, doesn't it? Can you use it to load many GBs of data? If you load the data

Explorer in 2.3

, threads, query-cache, memory usage, etc...) to use for tuning b. Show all database connections

Time Sequence Object Command DB

class. In memory, all objects are defined by the latest schema. So is there a way

Issue with orphanRemoval and multiple EntityManagers

are not identical. However, enhanced classes are much more efficient in performance and in memory

Significant I/O costs during batch update or insert data.

ObjectDB, which may increase memory usage and cause paging. If you still experience a problem

Occasionally slow requests in load testing

. There is also plenty of free RAM for any other memory needs. We are using SSDs in production

Iterating over large result set

Hi, a few days ago I was writing some code to iterate over quite a large dataset. Fitting that data in memory was quite a challenge even with 3GB heap space. So I did paging using two loops. for (int page=0; page
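The snippet above breaks off at the start of a two-loop paging pattern; in JPA this is usually driven by `setFirstResult(page * pageSize)` and `setMaxResults(pageSize)` on the query. A sketch of the paging arithmetic, using a plain list as a self-contained stand-in for the query:

```java
import java.util.ArrayList;
import java.util.List;

// Two-loop paging: the outer loop advances one page at a time, so only
// pageSize rows need to be held in memory at once. The List source is a
// stand-in for a JPA query with setFirstResult(first)/setMaxResults(pageSize).
public class Paging {
    static List<List<Integer>> pages(List<Integer> source, int pageSize) {
        List<List<Integer>> result = new ArrayList<>();
        for (int page = 0; page * pageSize < source.size(); page++) {
            int first = page * pageSize;                         // setFirstResult
            int last = Math.min(first + pageSize, source.size()); // setMaxResults bound
            result.add(new ArrayList<>(source.subList(first, last)));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 10; i++) data.add(i);
        // three pages: [0..3], [4..7], and a short final page [8, 9]
        System.out.println(pages(data, 4));
    }
}
```

Note that paging a live query this way assumes the data does not change between pages; otherwise rows can be skipped or repeated as offsets shift.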

Merge of entity classes with "mapped by" very slow

where the 'child' part of the relation has almost 10k objects, so merging the parent is also memory-inefficient

Selective merge/cascade of detatched entity

is an unusual one, in that it keeps all objects in memory at all times, in a Detached state (I realise

Bidirectional OneToMany is slow on updated items

the find calls of Address and Person instances with an in-memory HashMap and get calls. support Support

ObjectDB 2.5.6

a NullPointerException in query compilation. Fixed a memory leak (Negative snapshot user count) in UPDATE queries on uncommitted changes (issue #1407). Fixed a memory leak in reflection mode (when using

NullPointer when accessing entity field

database object, but each EntityManager has a separate memory representation of that object, and should not use memory objects that are managed by other EntityManager instances. Apparently

InternalException when using MEMBER OF on a large list in a query

. Can you give us a recommendation on how we can represent the Key (UUID) most efficiently? Both in terms of memory usage of the database files, memory usage of the JVM, and efficiency for queries. We already need

ObjectDB 2.6.4

Added memory-only database mode. Added an option to cascade detachment to embedded objects. Added an option to disable reuse of database page objects. Improved release of query resources. Improved memory management in multi-flush transactions. Extended detachment of embedded objects

Unexpected error when loading all entity instance

. It seems there was insufficient memory on the machine. I will observe the issue and update if I find a way to reproduce it. Best wishes doppelrittberger Markus Ritter Insufficient memory may cause a crash

huge heap usage

we are using objectdb 2.7.2_6. At a customer we have huge problems with the memory consumption (see the heap dump attached). The database size is pretty small (1.4 GByte). Any idea? hgzwicker Hans ... about the roots and paths that keep these objects in memory (particularly roots and paths

Database File Compression

ObjectDB database files are usually smaller than database files of other Database Management Systems (object databases and relational databases), but additional internal compression is still possible. Usually smaller files are also faster because fewer I/O operations are required and memory cache

ObjectDB 2.0.0

an entity after a failed find followed by persist. Fixed a memory leak in eager load of large object

Monitoring and Performance counters

from last start - caches hit rate - total sessions from clients - memory usage (for heap space

InternalException during producer/consumer scenario

) The number of objects produced can potentially exceed available memory, so they can be held on disk

Navigation through lazy loading from Detached Objects

yet, but for example, it could affect memory consumption and file closing if detached objects (which are not detached

Resource leaking via a rogue Shutdown Hook

static initialiser ( ). Attaching screenshots of the memory footprints with a live application

Terracotta Ehcache Integration

Hello, it would be good if you made an alliance with Terracotta Ehcache. Their distributed memory cache is very fast. If we mix it with your engine, it will be very great for us. kadirbasol Kadir BASOL

ObjectDB 2.4.4

Improved memory management when handling very large transactions (issue #935). Improved the Doctor to handle additional situations. Added support for retrieving the metamodel without a connection (feature request #896). Added support for disabling logging. Improved missing license notification