
Soft Reference Object Cache Recommendation

#1

(Trying again.  My last post, which took ~50 minutes to compose, failed to submit, possibly due to an attachment exceeding the upload size limit before the post attempt.  =(  )

I want my persistent objects to be cached until the Java Virtual Machine (JVM) performs a full Garbage Collection (GC) when heap RAM runs low for my application, using a Least Recently Used (LRU) mechanism so that older objects are discarded before more recently accessed ones.  I attempted to use a Java SoftReference to my large byte[] and to instruct ObjectDB to use soft references for its Level One (L1) object data cache.  I disable the L2 cache and use 1 MB for the ObjectDB datastore page file cache with a page file size of 64 KB.  Here are the relevant objectdb.conf settings:

<size initial="64kb" resize="64kb" page="64kb" />
<processing cache="1mb" max-threads="10" />
<query-cache results="16kb" programs="10" />
<cache ref="soft" level2="0" />
<dirty-tracking arrays="false" />
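
(For context, these settings are read from objectdb.conf when the PersistenceManagerFactory is created via the standard JDO bootstrap.  A minimal sketch, where "blobs.odb" is a hypothetical database file name:)

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

Properties properties = new Properties();
// ObjectDB's JDO PersistenceManagerFactory implementation class:
properties.setProperty("javax.jdo.PersistenceManagerFactoryClass", "com.objectdb.jdo.PMF");
properties.setProperty("javax.jdo.option.ConnectionURL", "blobs.odb"); // hypothetical file
PersistenceManagerFactory persistenceManagerFactory = JDOHelper.getPersistenceManagerFactory(properties);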

My persisted class, "JdoBlob2", contains only a byte[] and uses a soft reference to make that array eligible for GC when the JVM performs a full GC.

private byte[] data; // use makeDirty() when changing this array or any of its elements
private transient SoftReference<byte[]> softReferenceData;

I use StoreCallback and StoreLifecycleListener to have my code manipulate softReferenceData in my JdoBlob2 instances, so that there is a strong reference to the byte[] when the data needs to be written back to the ObjectDB datastore, and only a soft reference when the byte[] merely needs to be read and can sit in the cache until more heap RAM is needed.


@Override public void jdoPreClear(){
    // The object is being cleared from the L1 cache; drop the cached array.
    if(null != softReferenceData) softReferenceData.clear();
    softReferenceData = null;
}

@Override public void jdoPreDelete(){
    // The object is being deleted; drop the cached array.
    if(null != softReferenceData) softReferenceData.clear();
    softReferenceData = null;
}

@Override public void jdoPostLoad(){
    // Move the freshly loaded array behind a soft reference so it is GC-eligible.
    softReferenceData = new SoftReference<>(data);
    data = null;
}

@Override public void jdoPreStore(){
    // A dirty object, via setData(), strongly references the byte[] via the "data" field, so ignore softReferenceData.
    if(null != softReferenceData) softReferenceData.clear();
    softReferenceData = null;
}

public void postStore(){
    // The store is complete; demote the strong "data" reference back to a soft reference.
    if(null != softReferenceData) softReferenceData.clear();
    softReferenceData = new SoftReference<>(data);
    data = null;
}
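
For completeness, accessors along these lines are what I have in mind (a simplified sketch, not my exact code; the refresh() fallback is just one assumed way to recover a referent that was cleared by GC):

public byte[] getData(){
    if(null != data) return data; // dirty: the strong reference is held until store
    byte[] bytes = (null != softReferenceData) ? softReferenceData.get() : null;
    if(null == bytes){
        // The referent was garbage collected; re-read the field from the datastore.
        // (Assumes a managed instance with an open PersistenceManager.)
        JDOHelper.getPersistenceManager(this).refresh(this);
        bytes = (null != data) ? data : softReferenceData.get(); // jdoPostLoad() re-wraps the loaded array
    }
    return bytes;
}

public void setData(byte[] newData){
    data = newData; // strong reference keeps the array reachable until postStore()
    if(null != softReferenceData) softReferenceData.clear();
    softReferenceData = null;
    JDOHelper.makeDirty(this, "data"); // tell ObjectDB the field changed so it is written at commit
}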

private static final StoreLifecycleListener storeLifecycleListener = new StoreLifecycleListener(){
    @Override public void preStore(InstanceLifecycleEvent instanceLifecycleEvent){
        // We are using jdoPreStore() in JdoBlob2, which is called after this, so do nothing here.
    }
    @Override public void postStore(InstanceLifecycleEvent instanceLifecycleEvent){
        JdoBlob2 jdoBlob2 = (JdoBlob2)instanceLifecycleEvent.getPersistentInstance();
        jdoBlob2.postStore();
    }
};
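
The listener is registered with the standard JDO call, e.g. (assuming a PersistenceManager named persistenceManager):

// Register the listener for JdoBlob2 instances only.
persistenceManager.addInstanceLifecycleListener(storeLifecycleListener, JdoBlob2.class);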

Everything seems to work great, but some byte[]s still hang around after a full JVM GC.  I think this is because the ObjectDB datastore page file cache keeps strong references to the pages that back a byte[] until the owning JdoBlob2 object is evicted from the ObjectDB cache, such as when the JdoBlob2 is deleted.

I use Java VisualVM to look at my application heap, see the references to the byte[], and infer that the byte[]s cannot be released back to the heap after a full GC because of the ObjectDB-owned references.  I have attached a screenshot (the heap file was too large to upload).

I want a JdoBlob2 object cache that flushes automatically when a full JVM GC happens, which I think should be possible by configuring ObjectDB to use soft references and by using my JdoBlob2 correctly in the ObjectDB environment.  Do you have any recommendations on how to achieve this?  Perhaps I am just missing something that you can see?

If I am unable to use ObjectDB to achieve the desired JdoBlob2 caching, then I think I must track all JdoBlob2s myself and, upon detecting a full JVM GC, manually evict JdoBlob2s from their relevant PersistenceManager (via persistenceManager.evict(jdoBlob2)).  I would maintain the list in LRU order so the cache sheds older JdoBlob2s and keeps the newer ones, e.g., deleting the older half of the list each time a full GC is detected, as sketched below.
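
A minimal sketch of that fallback (all names here are hypothetical, and full-GC detection uses an assumed soft-reference "canary" trick):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;

public class JdoBlob2LruTracker {
    // accessOrder=true makes iteration order least-recently-used first.
    private final Map<Object, Boolean> lru = new LinkedHashMap<>(16, 0.75f, true);
    private final ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
    private SoftReference<byte[]> canary = new SoftReference<>(new byte[1024], queue);

    public synchronized void touch(Object jdoBlob2){
        lru.put(jdoBlob2, Boolean.TRUE); // record (or refresh) the LRU position
    }

    // Call periodically; evicts the older half once the canary has been collected.
    public synchronized void onPossibleFullGc(){
        if(null == queue.poll()) return; // canary still alive: no soft-ref-clearing GC yet
        int toEvict = lru.size() / 2;
        Iterator<Object> it = lru.keySet().iterator();
        while(toEvict-- > 0 && it.hasNext()){
            Object blob = it.next();
            PersistenceManager pm = JDOHelper.getPersistenceManager(blob);
            if(null != pm) pm.evict(blob); // drop it from ObjectDB's L1 cache
            it.remove();
        }
        canary = new SoftReference<>(new byte[1024], queue); // re-arm for the next GC
    }
}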

 

#2

(Upon posting, the ObjectDB website gave an error that the uploaded screenshot image failed to upload, but it appears to be attached properly when I check the forum.)

#3

#4

I was able to create a Java VisualVM heap dump that was small enough to upload.  (See attachment.)

#5

Additional explanation of the screenshot in post #1 is needed.  Does it show the byte[] that is not released and that you expect to be released?  Can you see a path of strong references from a GC root that keeps that byte[] from being garbage collected?

ObjectDB Support
#6

(Background for all readers:  Java VisualVM (e.g. "C:\Program Files\Java\jdk1.8.0_102\bin\jvisualvm.exe") can be used to load the heap dump in post #3.  One can then navigate to Classes, double-click "byte[]" to view its instances, click "Compute retained sizes" to see the RAM those instances keep from being GC-ed, then select one of the instances to view all the references to it, noting the legend at the bottom.  See "JavaVisualVmAid.png" for a visual guide.)

 

The screenshot in post #1 was taken after triggering multiple full GCs (via the option on the Java VisualVM Memory tab).  Gigabytes of RAM were freed up, and the screenshot shows what did NOT get freed.  Presumably, all items listed therefore have a strong reference path from some GC root.  (I also had my program create a 1 GB softly referenced byte[] to verify that it did indeed get flushed from RAM, and from this listing, after a single full GC.)
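
That canary experiment, roughly (a simplified sketch; the class name and option values are illustrative):

import java.lang.ref.SoftReference;

public class SoftRefCanaryTest {
    public static void main(String[] args) throws InterruptedException {
        // Run with a max heap comfortably above 1 GB, e.g. -Xmx2000m.
        SoftReference<byte[]> canary = new SoftReference<>(new byte[1 << 30]); // 1 GB referent
        System.gc(); // request a full GC; whether soft refs clear depends on heap pressure and -XX:SoftRefLRUPolicyMSPerMB
        Thread.sleep(1000); // give the collector a moment
        System.out.println(null == canary.get() ? "canary was collected" : "canary survived");
    }
}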

I expected the byte[]s from all the JdoBlob2 objects to no longer be listed.  You can see from "retained size" that those objects are only keeping themselves from being garbage collected and are NOT retaining other objects (i.e., they do not hold strong references that prevent other objects from being GC-ed).

My theory is that ObjectDB is holding strong references to the byte[]s, thus keeping them from being GC-ed.  Supposedly, one can navigate the selected JdoBlob2 object's reference listing in Java VisualVM to discover which object(s) hold strong references to that JdoBlob2.  That seems a little difficult, since there are a lot of ObjectDB objects under that listing and they are obfuscated, making them harder to recognize or understand.  But here is my thought ...  (Remember that one can open the heap dump in Java VisualVM and drill down through all the references of the JdoBlob2, beyond what is shown in the screenshot.)

I think the screenshot shows a JdoBlob2 object that cannot be garbage collected.  It has a single reference, a soft one, from JdoBlob2.softReferenceData.referent, which is the intended design: JdoBlob2 refers to its byte[] via the Java SoftReference mechanism, whose get() returns the "referent", or null if the referent has been GC-ed.  So the referent still references the byte[] because the byte[] was not collected, because something held a strong reference to it.

That strong reference seems to be "i", which is an "ENT" class object; I presume that is an ObjectDB-obfuscated name that I hope ObjectDB Support can recognize or decode.  Using the heap dump, I navigate through a lot of unrecognizable objects under it (all presumably owned by ObjectDB).  I can't make much sense of it, so I look at the other objects that are also prevented from GC and discover a lot of 64 KB sized objects (i.e., my requested ObjectDB page file size) that are not freed.  They are of type "PAG" with a field name of "e", which I imagine might stand for something like "Page Access Group element".

So I eventually arrive at the conclusion that a lot of ObjectDB paging file entries are being retained, that they are the actual backing of the byte[]s, and that there must be a strong reference somewhere (help!) deep down in all those ObjectDB object links that keeps my byte[] from being GC-ed as I designed it to be.

"PAGs.png" is a screenshot that shows those PAGs are retaining 64 KB from being garbage collected.

"UniLogical.png" is a screenshot that shows my other objects (e.g. com.unilogical.moxydox.protocol.jdo.JdoDocument) that are also not freed.  They are NOT preventing other objects from being freed ("retained size is zero"), but they are being held back by references which again appear to be owned by ObjectDB ("ENT" entries from PersistenceManager).

So at this point I think ObjectDB Support could really help me figure out what all those ObjectDB references are, and see what is actually keeping my byte[] from being freed.  (Maybe something I am doing in my code is implicitly, but legitimately, telling ObjectDB to keep strong references?)  Recall that if I delete the JdoBlob2 via persistenceManager.deletePersistent(jdoBlob2), then the memory is freed by the next full GC.  I can also close the PersistenceManager, after which all the memory can be freed.  I theorize that calling persistenceManager.evict(jdoBlob2) would therefore also allow the byte[] to be freed by removing ObjectDB's page file caching reference.

Any help is appreciated!

(P.S.  Yikes!  Another posting attempt (~1 h 15 m) was almost lost during the attachment uploads.  All seemed lost, but after I reloaded the webpage a few times and started another "Post Reply", most of my in-progress post reappeared, though all the attachments were lost.)

 

 

 

#7

More information.

I did a test to see if evict() would allow the RAM to be flushed, using code like the following:

private static final StoreLifecycleListener storeLifecycleListener = new StoreLifecycleListener(){
    @Override public void preStore(InstanceLifecycleEvent instanceLifecycleEvent){
        // No-op; StoreLifecycleListener requires both preStore() and postStore().
    }
    @Override public void postStore(InstanceLifecycleEvent instanceLifecycleEvent){
        Object object = instanceLifecycleEvent.getPersistentInstance();
        PersistenceManager persistenceManager = JDOHelper.getPersistenceManager(object);
        persistenceManager.evict(object); // try to drop the instance from the L1 cache right after store
    }
};

It did not help.  Still, the only way I have been able to make the RAM flushable is persistenceManager.deletePersistent(myObject) or PersistenceManagerFactory.close().

I inspected the memory references in Java VisualVM.  I see the expected SoftReference to my object, but there is an "i" field of type "ENT" that is a strong reference to that SoftReference.  Under "i", I see cascading ENT and ENT[] references and a lot of __odbTracker fields.  ("ENT.png" attached as a screenshot.)

I think I will try to create a standalone test application to help simplify the analysis.

#8

(Side note:  the attachment upload reported a failure again.  This time I captured a screenshot; it looks like the upload actually succeeded, though.  "UploadFailure.png" is the screenshot.)

#9

Test program attached ("Test.java", "ObjectDb.conf", and a movie that shows how to use it).  (A higher quality movie is available if necessary and if an alternative upload mechanism is provided to get past the 32 MB attachment upload limit.)

#10

#11

PAG instances are pages that are cached by ObjectDB. The cache size is set in this configuration parameter, but note that you cannot eliminate the cache completely, and PAG instances may consume more memory than set in the configuration while an active transaction is preparing to write updates to the database (until commit).

Your test (in #9 above) does not show strong references from ObjectDB to your Data instances after committing a transaction. Note that we got a much smaller heap size (4 MB) than you reported when running the test. Assuming you used enhanced classes (as mentioned in your test comments), this may be the result of a different JVM configuration.

Selecting Data instances in the Java VisualVM heap dump after commit, right-clicking, and selecting "Show nearest GC root" results in a VisualVM message box indicating "No GC root found", i.e. they are not held by ObjectDB (at least when the test classes are enhanced). They are still not released on GC because this is the way soft references work (unlike weak references).

ObjectDB Support
#12

Thank you for your investigation.  I will investigate more on my side later this morning.  For now, here is more information for you.

I use this ObjectDb.conf setting:

<processing cache="1mb" max-threads="10" />

The test uses enhanced classes, and RAM is inspected after commits, not during them.

The JVM can make a difference.  I am using Oracle 8u112.  "server" vs. "client" makes a difference; if you are not running the test on a "server-class computer", as defined by Oracle, then give the explicit "-server" java option.  I use the G1 ("garbage first") collector option and explicitly set the min and max heap sizes.  Another option that affects soft references is the LRU policy (-XX:SoftRefLRUPolicyMSPerMB), though I have tried setting it to 0 and 1 and noticed no difference, including when manually allocating a 1 GB softly referenced byte[], which immediately gets GC-ed on a full GC no matter what the soft reference LRU policy is set to.

Perhaps you could see if you get different results (closer to the ones I saw) when using these JVM startup parameters for the test:

-server
-XX:InitialHeapSize=2000M
-XX:MaxHeapSize=2000M
-XX:+UseG1GC
-XX:SoftRefLRUPolicyMSPerMB=time

It seems that you feel the issue is JVM/soft-reference related, and that no ObjectDB options or explicit flushing requests will change the behavior.  Yes?  Any other suggestions for a resolution?  Perhaps setting ObjectDB to use weak references, but then using persistent-object callbacks/listeners to track all objects in a manual strong-reference list, and upon GC detection removing the strong references, in hopes that ObjectDB can then immediately free the weakly referenced objects from the object cache and the datastore page file cache?

Thank you.

 

 

#13

Are you sure ObjectDB is not holding on to memory when it should be freed?

Even after using "weak" references and closing the PersistenceManagerFactory, there are PAGs remaining with a GC root of "persistenceManager (java frame)", as seen in this improved test, test output, and screenshot ...

 

Maybe a better question is: how can I have a single PersistenceManagerFactory, with a single PersistenceManager, retain objects for caching but free older ones when RAM gets low?

 

 

#14

> It seems that you feel the issue is JVM/soft-reference related and no ObjectDB options or explicit flushing requests will change the behavior.  Yes?

Regarding using soft references for the L1 cache (the PersistenceManager), it indeed looks like a JVM tuning issue, because ObjectDB has no control over soft references (beyond using soft references rather than strong references where applicable).

> in hopes that ObjectDB can then immediately free the weakly referenced objects from the object cache and the datastore pagefile cache?

These are two different topics. ObjectDB doesn't hold strong references to entity objects unless necessary (e.g. for dirty objects until commit). The datastore page cache, on the other hand, is fixed and not released by ObjectDB. You can try reducing its size.

> Are you sure ObjectDB is not holding on to memory when it should be freed?

We are always open to checking possible faults and fixing them, but the behaviour of soft references to entity objects seems normal.

> Even after using "weak" references, and closing PersistenceManagerFactory, there are PAGs remaining with a GC Root of persistenceManager (java frame).

That is right. The page cache may not be released when the PersistenceManagerFactory is closed, only when the PersistenceManagerFactory is collected by the GC, which usually happens soon after it is closed. Maybe the cache should be released sooner, but this is probably not your main issue, as you can always discard the PersistenceManagerFactory after closing it.

ObjectDB Support
#15

Thank you, those responses are helpful.  With those, and the additional testing I have been doing, I think the best approach is to use soft references and tune the JVM.

#16

Can you please give additional information about ObjectDB's usage of the datastore page cache?  I understand that the ObjectDb.conf value I specify (<processing cache="1mb" max-threads="10" />) is a HINT for the number of PAGs that may be created, that those PAGs are fixed and ObjectDB will not free them, and that more PAGs may be created during a commit operation as needed.  (Are those additional PAGs freed after the commit?)

What I don't understand is that the number of PAGs seems to keep growing well beyond the requested limit.  For example, in my testing I request a limit of 1 MB, but I see over 5 MB of PAGs after all transactions are complete (still there after many full GCs, including under low memory conditions).  Is there some bound on the expansion of PAGs that I can rely on?  For example, if it may grow up to 5 times larger than I request, then I could request 1/5 of the limit I desire (1 MB divided by 5 for the processing cache value).

I am concerned that the PAGs will grow beyond my control when my production application is servicing hundreds or thousands of databases.  If they all grow to 5 times (or more) the RAM limit I architected for, then I will hit OutOfMemoryErrors with no way to free the real memory problem: a lot of PAGs that I cannot free and that ObjectDB will not free.

Using -XX:SoftRefLRUPolicyMSPerMB=0 gives my test program all the desired behavior.  The last remaining issue is to figure out how to keep PAGs within some deterministic RAM usage bounds.

Thank you.

 

#17

(Please don't miss post #16 above.)

A follow-up note to give kudos to ObjectDB.  With the aid above, I made implementation changes to my product code, with amazing results so far.

I see my product is over 20 times faster than with the previous database (JavaDB/Derby) and uses less than half the resources it did before!

Thank you to you, your team, and ObjectDB!

#18

Just like I told you: a great product and a great company that is making life much easier for a lot of grateful people.

#19

Thank you for your kind words.

The page cache size specified in the configuration is indeed a hint. ObjectDB allows up to 80 additional pages (beyond the pages included in the requested cache size) before it starts purging pages from the cache.

In addition, pages with updates are held by ObjectDB regardless of the cache size, but only until commit.

Note that if you open several databases, the cache and pages are per database.

80 pages x 2 KB (the default page size) seems harmless, but maybe we should change this for larger page sizes, as 80 pages x 64 KB = 5 MB, which may be too much for some applications (and is consistent with the extra PAG growth you observed in #16).

ObjectDB Support
#20

Perfect!  Thank you!

