NegativeArraySizeException on committing a transaction



We started to get a java.lang.NegativeArraySizeException from ObjectDB when changing an object in the database and committing the transaction. I tried to repair the database file with the Database Doctor, but the same error also occurs on the repaired file.

By the way, while the database file was being repaired an internal error was thrown (details here: http://www.objectdb.com/database/issue/285 ), but the repair continued and created a new file.

In production we are using version 2.6.4.b01 in embedded mode, but after this bug appeared I tried the latest available version, 2.6.9_07, and it behaves the same way. The strange thing is that the bug appears only when changing one particular object in the database; others work fine, so I cannot provide a test case.

The stack trace from version 2.6.9_07 is attached. I have also attached screenshots from the debug environment showing the values of the variables in the last six methods of the stack trace.

I would appreciate help with this issue as soon as possible.


Attachment: stacktrace_2.6.9_07.txt (5.27 KB) — stack trace of the NegativeArraySizeException

We will need more information on this specific object.

Is there a possibility to get access to the database file to check the problematic object?

The stack trace indicates an unexpected overflow during string writing. What are the sizes of the strings in that object, and which character set is used? Perhaps there is a bug in writing a specific type of character.

Can you update this specific object (and commit) after emptying its string fields?


ObjectDB Support
ObjectDB - Fast Object Database for Java (JPA/JDO)

If you use deeply nested embedded objects in this entity object, please explain the relationships between the involved classes.

ObjectDB Support

All the strings are standard Unicode strings, the JVM default (UTF-16, I believe).

But the overflow is not tied to strings: when I removed the given embedded object (PlacementBlockModel) from the entity and saved it again, I got the NegativeArraySizeException while a HashMap in another PlacementBlockModel was being written.

The issue seems rather to be connected to the size of the entity being written. As you can see in the screenshots, the byte[] on which ObjectDB operates is over 1 GB, so I am guessing my entity can reach that size.

It is a quite complicated structure called ProjectVersionHistory, which holds 80 instances of Project in an ArrayList.

Each Project then contains hundreds of ProjectPart-s, and each ProjectPart contains hundreds of PlacementBlock-s.

Each PlacementBlock has a HashMap with a few dozen entries mapped via String keys.

So a rough estimate gives around 80 * 500 * 200 * 50 = 400,000,000 Strings contained in that entity. I think this amount can easily result in an object size of over 1 GB in the database.
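The estimate above can be checked with a quick back-of-envelope calculation (the per-level counts are the rough figures from this thread, not measured values):

```java
// Back-of-envelope check of the size estimate: 80 Projects x ~500
// ProjectParts x ~200 PlacementBlocks x ~50 map entries per block.
public class SizeEstimate {
    public static void main(String[] args) {
        long strings = 80L * 500 * 200 * 50;
        System.out.println(strings);            // 400000000

        // Even at only ~3 serialized bytes per string (a very low
        // assumption), the entity already exceeds 1 GB (2^30 bytes).
        long bytes = strings * 3;
        System.out.println(bytes > (1L << 30)); // true
    }
}
```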

The NegativeArraySizeException occurred at the moment I tried to add the 81st Project to the ProjectVersionHistory.

The entity itself was not corrupted, though: when I made a copy of it (by making it transient and then persistent again), a new object was created in the database that behaved exactly like the old one, only with a different OID.

Then I deleted 60 instances of Project from the ProjectVersionHistory, leaving only 20, and since then my modifying transactions have been working fine on the original object that was failing.

Could it be that ObjectDB indexes something about that byte[] with an int that overflows once the byte[] grows beyond 1 GB?
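The suspected failure mode can be illustrated with a minimal, hypothetical sketch (this is illustrative Java, not ObjectDB's actual buffer code): a buffer that doubles its int-typed capacity overflows to a negative value once the capacity passes 2^30, and allocating an array with that value throws exactly this exception.

```java
// Hypothetical sketch of the suspected int-overflow failure mode.
// Doubling an int capacity that is already 1 GB (2^30) exceeds
// Integer.MAX_VALUE (2^31 - 1) and wraps to a negative number;
// `new byte[negative]` then throws NegativeArraySizeException.
public class GrowableBuffer {
    static int doubledCapacity(int current) {
        return current * 2; // wraps past Integer.MAX_VALUE
    }

    public static void main(String[] args) {
        int oneGb = 1 << 30;                  // 1,073,741,824
        int doubled = doubledCapacity(oneGb);
        System.out.println(doubled);          // -2147483648
        try {
            byte[] grown = new byte[doubled];
        } catch (NegativeArraySizeException e) {
            System.out.println("NegativeArraySizeException");
        }
    }
}
```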


Your observation is correct. ObjectDB does not currently support entity objects larger than 1 GB, and unfortunately trying to store such a large object results in an exception like the one you got, with no clear error message.

We will check what can be done to increase the limit.



ObjectDB Support

Following your report we increased the limit, so entity objects of almost 2 GB are now supported. Some ObjectDB data structures are limited to 2 GB, so due to additional overhead you cannot persist a full 2 GB object, but starting with build 2.6.9_08 you can store entity objects larger than 1 GB (which failed in previous versions), e.g. 1.5 GB.

However, note that ObjectDB is not designed for storing such a large amount of data in a single entity object. We highly recommend reconsidering your current design, which uses huge entity objects with many nested levels of embedded objects, in favor of a better separation into multiple entity objects.
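One possible shape for such a separation is sketched below. This is a hypothetical JPA mapping fragment based only on the class names mentioned in this thread (ProjectVersionHistory, Project), not the reporter's actual model: each Project becomes a standalone entity referenced from the history, so no single stored object has to hold the whole tree.

```java
// Hypothetical refactoring sketch: reference Projects as separate
// entity objects instead of embedding them all inside one huge
// ProjectVersionHistory. Each Project is then stored (and rewritten
// on commit) independently, keeping every stored object small.
import javax.persistence.*;
import java.util.List;

@Entity
public class ProjectVersionHistory {
    @Id @GeneratedValue long id;

    // A relationship to independent entity objects rather than a
    // deeply nested embedded structure.
    @OneToMany(cascade = CascadeType.ALL)
    List<Project> projects;
}

@Entity
class Project {
    @Id @GeneratedValue long id;
    // ProjectPart (and possibly PlacementBlock) could likewise be
    // promoted to entities if a single Project is still too large.
}
```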

ObjectDB Support

Thank you for the fix. We will consider reorganizing our data structures, but with this increased limit we will have enough time to plan it carefully.
