OutOfMemoryError with ObjectDB 2.0

#1

Hi,

I am getting the OutOfMemoryError below after migrating from ObjectDB 1.0 to 2.0.

The application works fine with only a 256MB heap size with ObjectDB 1.0.

But on 2.0, even with a max heap size of 1024MB, I get this error.

The application reads an XML file of around 1GB, creates a new odb file, and inserts 200 records at a time.

The transaction obtained from the PersistenceManager is committed after persisting 200 records, and then the same transaction object from the same PersistenceManager is used to begin the next transaction.

With ObjectDB 2.x, do we need to close the PersistenceManager after every commit, or how else can I resolve this issue? Please help.

================================================

java.lang.OutOfMemoryError cannot be cast to java.lang.RuntimeException
at com.objectdb.o.JDE.f(JDE.java:50)
at com.objectdb.o.OBC.onObjectDBError(OBC.java:1504)
at com.objectdb.jpa.EMImpl.commit(EMImpl.java:279)
at com.anritsu.eu.database.DatabaseImplementation.storeItem(Unknown Source)
at com.anritsu.eu.database.DatabaseMessageStorer.newItemParsed(Unknown Source)
at com.anritsu.eu.datalogger.parsing.RedactorMap.notifyRedactors(Unknown Source)
at com.anritsu.eu.datalogger.parsing.MessagePrimitiveContentHandler.endElement(Unknown Source)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.dtd.XMLNSDTDValidator.endNamespaceScope(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.dtd.XMLDTDValidator.handleEndElement(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.dtd.XMLDTDValidator.endElement(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
at com.anritsu.eu.datalogger.parsing.ParserGroupThreadRunner.run(Unknown Source)
#2

It is hard to figure out the cause of an OutOfMemoryError based on the stack trace alone. If you can provide a heap dump of the application, it will show which objects fill the heap.

But one possible cause of an OutOfMemoryError in ObjectDB 2.x that was not possible in ObjectDB 1.x is using ObjectDB with no enhancement (which is supported but not recommended in ObjectDB 2.x, and impossible in ObjectDB 1.x). In that case, new objects are not released automatically on commit. This could be solved by calling JPA's clear or similar methods, but if this is the problem, it would be better to solve it by enhancing the classes.
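
For example, something along these lines after each batch commit (a sketch only, assuming em is the application's EntityManager):

    em.getTransaction().commit();
    em.clear(); // detach all managed entities so they can be garbage collected
    em.getTransaction().begin();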

ObjectDB Support
#3

The following objects fill up most of the heap:

com.objectdb.o.ENT
java.lang.ref.WeakReference
com.objectdb.o.SLV

I am not able to attach the heap dump as it is 1GB (however, I have attached a screenshot from the profiler).

I have also tried persistenceManager.evictAll() after commit, but it does not make a difference.
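
The pattern I tried is roughly this (simplified; persistenceManager and transaction are the same objects used throughout the loop):

    transaction.commit();
    persistenceManager.evictAll(); // evict all instances from the PersistenceManager's cache
    transaction.begin();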

All the entity classes in my package.jdo have been enhanced using com.objectdb.Enhancer (they were also enhanced when ObjectDB 1.0 was used).

Is there any other way to release the objects on every transaction commit?

#4

The question is who holds the com.objectdb.o.ENT instances with strong references?

It may be the result of creating multiple PersistenceManager instances without closing them.

ObjectDB Support
#5

When I received this error I was only dealing with one odb file. The profiler shows only one PersistenceManager instance (com.objectdb.jdo.PMImp).

The application persists records one by one and commits the transaction after every 200 records (it reads from the XML file and persists to the odb file).

The PersistenceManager is only closed at the end, after committing all the records to the odb file.

#6

Can you upload a runnable application that demonstrates the OutOfMemoryError?

Use a support ticket if you can upload the application but prefer to do so privately.

ObjectDB Support
#7

I will not be able to submit the actual application as it cannot be run outside our development environment.

I am not sure if I can create a small application to demonstrate the same issue.

Also, if I close the PersistenceManager every time I commit a transaction and open a new one for the next transaction, the OutOfMemoryError is resolved with odb 2.x (with 1.0 I did not have to close the PersistenceManager on every commit). This suggests that the objects are only cleared from the heap when the PersistenceManager is closed, not on transaction commit.

Here is the code snippet that is involved in persisting a record in the odb file. This code is called in a loop until all records are persisted, and at the end the PersistenceManager is released.

    PersistenceManager persistenceManager = getPersistenceManager(uniqueTestId, true, true);
    Transaction transaction = persistenceManager.currentTransaction();

    // Commit every MAXIMUM_NEW_RECORD_TRANSACTION_SIZE (currently 500) records.
    // This is because ObjectDB keeps all items in memory during a transaction.
    if ((writeItemNumber > 0) && ((writeItemNumber % MAXIMUM_NEW_RECORD_TRANSACTION_SIZE) == 0)) {
        transaction.commit();

        /* In 2.x, if the PersistenceManager is released here then there is no
           OutOfMemoryError, but it worked fine without this in 1.0: */
        //releasePersistenceManager(uniqueTestId);
        //persistenceManager = getPersistenceManager(uniqueTestId, true, true);
        //transaction = persistenceManager.currentTransaction();
        writeItemNumber = 0;
    }

    // Are we at the start of a new transaction?
    if (writeItemNumber == 0 && !transaction.isActive()) {
        transaction.begin();
    }

    writeItemNumber = writeItemNumber + 1;
    persistenceManager.makePersistent(itemToStore);
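
For completeness, here is a simplified standalone version of the close-and-reopen workaround (a sketch only; pmf, items, and the loop structure are illustrative stand-ins for our real helpers, using javax.jdo.PersistenceManager and javax.jdo.Transaction):

    PersistenceManager pm = pmf.getPersistenceManager();
    Transaction tx = pm.currentTransaction();
    tx.begin();
    int count = 0;
    for (Object item : items) {
        pm.makePersistent(item);
        if (++count % MAXIMUM_NEW_RECORD_TRANSACTION_SIZE == 0) {
            tx.commit();
            pm.close();                       // releasing the PersistenceManager here avoids the OutOfMemoryError in 2.x
            pm = pmf.getPersistenceManager(); // reopen for the next batch
            tx = pm.currentTransaction();
            tx.begin();
        }
    }
    if (tx.isActive()) {
        tx.commit(); // commit any remaining records
    }
    pm.close();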

#8

Your code looks fine.

You can upload a large heap dump to the ObjectDB public FTP server:

Host: objectdb.com, Username: public, Password: public

ObjectDB Support
#9

I am not able to connect to the FTP site objectdb.com to upload the heap dump.

My local IT team has advised me to check whether the FTP site is up.

If I ping objectdb.com (173.212.227.26) from my PC, I do not get any response.

Can you confirm that the FTP site is working, so that I can go back to my local IT team for help?

#10

The FTP server is up and working, and it responds to ping. It may be a firewall in your workplace.

ObjectDB Support
#11

I can connect to other FTP sites from my PC, but cannot connect to objectdb.com.

Our IT team is still looking into it.

#12

Meanwhile, other users have used the FTP server successfully.

Anyway, you can also put the file anywhere on the web and provide a link.

ObjectDB Support
#13

Hi,

The heap dump file java_pid5012.hprof has been uploaded to the FTP site objectdb.com for your investigation and advice.

Thanks

Madhu

#14

The cause of the problem is still unclear.

As a workaround, try clearing the persistence context after every commit using:

    ((EntityManager)pm).clear();
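
In the context of the loop from #7, this would look roughly as follows (a sketch using the names from that snippet):

    transaction.commit();
    ((EntityManager)persistenceManager).clear(); // detach all entities from the persistence context
    transaction.begin();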

ObjectDB Support
#15

I have tried ((EntityManager)pm).clear(); after every transaction commit and I still get the OutOfMemoryError.

In terms of the biggest object dominators, the profiler shows the same as before (com.objectdb.o.PAG).

#16

In the heap dump that you sent the problem is not with PAG objects (which take only 20MB) but with ENT objects.

The clear method should release extra ENT objects but is not related to PAG objects.

If there is still a problem please upload a new heap dump that reflects the new situation.

ObjectDB Support
#17

I am uploading the screenshots from the latest heap dump (and the previous heap dump, which you already have).

I am not sure if there is much change in the situation.

If you think there is a change in the situation, I will have to arrange a heap dump upload (last time the IT team had to upload it from another, outside network).

#18

Yes, the problem now is different. Could you please post your objectdb.conf file?

The database cache, which is limited to 64MB by default, takes about 640MB to 1GB.

Uploading the new heap dump could help.

ObjectDB Support
#19

Contents of the objectdb.conf file (also attached). I think I have disabled recovery, which was enabled in the default configuration:

<!-- ObjectDB Configuration -->

<objectdb>

<general>
  <temp path="$temp" threshold="64mb" />
  <network inactivity-timeout="0" />
  <url-history size="50" user="true" password="true" />
  <log path="c:/temp/odb/log/" max="8mb" stdout="false" stderr="false" />
  <log-archive path="c:/temp/odb/log/archive/" retain="90" />
  <logger name="*" level="fatal" />
</general>

<database>
  <size initial="256kb" resize="256kb" page="2kb" />
  <recovery enabled="false" sync="false" path="." max="128mb" />
  <recording enabled="false" sync="false" path="." mode="write" />
  <locking version-check="true" />
  <processing cache="64mb" max-threads="10" />
  <query-cache results="32mb" programs="500" />
  <extensions drop="temp,tmp" />
</database>

<entities>
  <enhancement agent="false" reflection="warning" />
  <cache ref="weak" level2="0" />
  <persist serialization="false" />
  <cascade-persist always="auto" on-persist="false" on-commit="true" />
  <dirty-tracking arrays="false" />
</entities>

<schema>
</schema>

<server>
  <connection port="6136" max="100" />
  <data path="$objectdb/db" />
  <!--
  <replication url="objectdb://localhost/test.odb;user=admin;password=admin" />
  -->
</server>

<users>
  <user username="admin" password="admin">
   <dir path="/" permissions="access,modify,create,delete" />
  </user>
  <user username="$default" password="$$$###">
   <dir path="/$user/" permissions="access,modify,create,delete">
    <quota directories="5" files="20" disk-space="5mb" />
   </dir>
  </user>
  <user username="user1" password="user1" />
</users>

<ssl enabled="false">
  <server-keystore path="$objectdb/ssl/server-kstore" password="pwd" />
  <client-truststore path="$objectdb/ssl/client-tstore" password="pwd" />
</ssl>

</objectdb>
#20

The cache should be 64MB per open database, so assuming you have one open database, there is a cache memory problem. The heap dump may help in understanding the problem.

ObjectDB Support
#21

Does this mean we can only have an open database of no more than 64MB at a time?

In this application, the odb file that needs to be generated can be up to 3.6GB (this used to work with ObjectDB 1.x).

It now stops (OutOfMemoryError) when the odb file being generated reaches a size of around 700MB to 800MB (which is close to the max heap size of 1012MB).

If this problem is different, I will try to upload the heap dump.

#22

A database of 3.6GB is not very large. ObjectDB can handle much larger database files.

Without the ability to see the application running, it is difficult to understand the exact cause of the problem, but maybe the heap dump would help.

ObjectDB Support
#23

Hi,

I have uploaded the latest heap dump java_pid976.hprof to the FTP site after adding ((EntityManager)pm).clear(); after every transaction commit.

Please investigate and advise as soon as possible; we need to complete our upgrade to ObjectDB 2.x, and this is the only blocking issue for our release.

I am also attaching our package.jdo files (the previous file used with 1.0 and the current file migrated to 2.0). Please let me know if there are any issues with them.

#24

Previous version of package.jdo

#25

Build 2.4.3_06 includes a fix that may solve the problem. Please try it.

ObjectDB Support
#26

Thanks,

I have tried ObjectDB 2.4.3_06 and it has resolved the OutOfMemoryError.
