Server closes if large query runs out of memory

#1

We are using ObjectDB in production, and have found what we think is a serious problem: the ObjectDB server can run out of memory and stop if a query deals with too many objects.

This isn't the behaviour we expected for a product that we want to use to deliver 24/7 high-volume data collection.

ObjectDB surely has some idea of the sizes of the objects that it persists and retrieves, and so should be able to fail safely if it finds that too many objects are being retrieved to fit into memory. There must be some alternative to crashing!

If this memory estimate is too crude for some people, how about making it a non-default configuration option? 

#2

This issue was discussed in previous forum posts, and we are aware of the problem. ObjectDB includes a mechanism that reduces memory allocation (by using temporary files instead of RAM) when memory usage is high, but it is far from perfect and can fail. It will be addressed with high priority in future versions of ObjectDB.

Note that some operations are more difficult in Java than on other platforms, and recovery from OutOfMemoryError is one of them. It is not always possible to handle this error, and a JVM crash is often the result. So ObjectDB users must use a configuration that keeps a sufficient reserve of Java heap space in any scenario.
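For example (a sketch only; the heap size is an arbitrary value, and com.objectdb.Server is the server class in the standard objectdb.jar distribution), the server can be started with an enlarged heap and an automatic heap dump on failure:

    java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -cp objectdb.jar com.objectdb.Server

The -XX:+HeapDumpOnOutOfMemoryError flag also produces the kind of heap snapshot mentioned at the end of this post.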

Regarding a query that returns too many objects, consider limiting the number of result objects.
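For example (a minimal sketch; MyEntity is a placeholder class, and em and pm stand for an open EntityManager and PersistenceManager):

    // JPA: retrieve at most 1000 result objects
    List<MyEntity> results = em
        .createQuery("SELECT e FROM MyEntity e", MyEntity.class)
        .setMaxResults(1000)
        .getResultList();

    // JDO: restrict the result range to the first 1000 objects
    Query query = pm.newQuery(MyEntity.class);
    query.setRange(0, 1000);
    Collection batch = (Collection) query.execute();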

If you must support very large query result sets, check the temp threshold setting. If you reduce the threshold, ObjectDB will use temporary files instead of RAM more often. Processing complex queries may require many temporary lists, so unfortunately the effect of this parameter is not straightforward and you will have to check it against your application.
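For reference, the threshold is specified by the temp element in objectdb.conf (the values below are the documented defaults; adjust the path and threshold for your installation):

    <temp path="$temp/ObjectDB" threshold="64mb" />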

Reducing the number of working threads in the server can also help.
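The maximum number of server worker threads is specified by the processing element in objectdb.conf (again, the values shown are the documented defaults):

    <processing cache="64mb" max-threads="10" />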

More specific analysis and help will be possible if you can provide a memory heap snapshot.
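For example, a heap dump of a running server can be produced with the JDK's jmap tool (replace <pid> with the process id of the ObjectDB server JVM):

    jmap -dump:live,format=b,file=objectdb-heap.hprof <pid>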

ObjectDB Support
#3

The memory problem occurred during a JDO deletion query, and the temp threshold was set and was low (64mb). There is an easy work-around, which is to divide the deletion into batches of a smaller number of objects (see the sketch below). I understand that JPA bulk delete doesn't involve any object retrieval, whereas JDO deletion apparently does. Would using JPA for large deletions be less demanding of memory?
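For reference, our work-around looks roughly like this (a sketch only; MyEntity and the obsolete filter are placeholders for our real classes, and pm is an open PersistenceManager):

    // Delete matching objects in bounded batches, each in its own
    // transaction, so no single result set has to fit in memory.
    boolean more = true;
    while (more) {
        pm.currentTransaction().begin();
        Query query = pm.newQuery(MyEntity.class, "obsolete == true");
        query.setRange(0, 1000);               // at most 1000 objects per batch
        Collection batch = (Collection) query.execute();
        more = !batch.isEmpty();
        pm.deletePersistentAll(batch);         // delete only this batch
        pm.currentTransaction().commit();
    }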

#4

You are right. DELETE queries are implemented in ObjectDB differently for JDO and JPA (see the sketch after the list):

  • In JDO: a SELECT query is run on the server, and the results are sent to the client and deleted there. This keeps the client and the server fully synchronized, but it is very inefficient if you delete a lot of objects.
  • In JPA: a DELETE query is run directly on the server, and objects are not sent to the client. This may break synchronization between the server and the client (the client may not be aware that some objects it manages have been deleted), but it is much more efficient.
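A minimal sketch of the two styles (MyEntity and its obsolete field are placeholders; em and pm stand for an open EntityManager and PersistenceManager):

    // JPA: the DELETE statement executes entirely on the server;
    // matching objects are never loaded into the client.
    em.getTransaction().begin();
    int deleted = em.createQuery(
        "DELETE FROM MyEntity e WHERE e.obsolete = TRUE").executeUpdate();
    em.getTransaction().commit();

    // JDO: the matching objects pass through the client, which
    // keeps it synchronized but costs memory for large deletions.
    pm.currentTransaction().begin();
    Query query = pm.newQuery(MyEntity.class, "obsolete == true");
    long count = query.deletePersistentAll();
    pm.currentTransaction().commit();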
ObjectDB Support
