Hi,
We are evaluating ObjectDB 2.5.1_04 in embedded mode, hoping to replace SQLite in our application.
We observed that in our case the first SELECT query we run on the DB takes on average more than 4 minutes to complete; the DB contains about 320,000 records of a single object type. SQLite, in comparison, responds within a few seconds on average with the same number of records.
With this post we hope to receive your recommendations on how to improve this result.
Test System
We are running our tests on a 64-bit Linux (CentOS 6.3) machine with JDK 1.7. The system has 4 GB of RAM (DDR3-1333) and an Intel Core i5-3470 3.20GHz CPU (4 cores). The motherboard that we are using is an Intel DQ77MK.
While we test:
- CPU utilisation never goes beyond 20% on one core; the remaining cores are hardly used.
- Swap space is available but never used (no swapping occurs).
POJO
We store objects of a single class in the DB. The class has data members of basic Java types. Here is an extract of the code (the parts related to the query that we run):
@Entity(name="RecordingMetaData")
@SequenceGenerator(
    name="RecordingMetaData_sequence",
    sequenceName="RecordingMetaData_sequence",
    allocationSize=1
)
@Indices({
    :
    // Indices building trees in client applications
    @Index(members={"startDate", "toExtension"}),
    @Index(members={"startDate", "fromExtension"})
})
public class RecordingMetaData {

    private Long id;        // Primary key - annotated with @Id
    private Long version;   // Annotated with @Version
    :
    private Short channelNumber;
    :
    private Date startDate; // Date: date part of startDateTime
    private Date startTime; // Time: time part of startDateTime
    :
    private String toExtension;
    private String fromExtension;
Operation
We use two EntityManagers in our code, each running in a different thread:
1. The first EntityManager only adds records to the DB, at a rate of 5 records per second. Each add is a single new object and is wrapped in its own transaction (a simplified sketch follows this list).
2. The second EntityManager runs the JPQL SELECT query shown next.
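Roughly, the writer thread does the following (a simplified sketch; emf is our EntityManagerFactory, and running and buildNextRecord() are placeholders for our shutdown flag and record construction):

import javax.persistence.EntityManager;

EntityManager writer = emf.createEntityManager();
while (running) {
    RecordingMetaData record = buildNextRecord(); // a single new object per add
    writer.getTransaction().begin();
    writer.persist(record);                       // each add in its own transaction
    writer.getTransaction().commit();
    try { Thread.sleep(200); } catch (InterruptedException e) { break; } // ~5 adds/second
}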
Query
We test with the following query:
SELECT r from RecordingMetaData AS r ORDER BY r.startDate DESC, r.startTime DESC, r.channelNumber DESC
We limit the result to 50 records. Basically we wish to get the last 50 records stored in the database, i.e. the 50 youngest records.
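In code this looks roughly as follows (a sketch; em is the second EntityManager, and q and newest are illustrative names):

import java.util.List;
import javax.persistence.TypedQuery;

TypedQuery<RecordingMetaData> q = em.createQuery(
    "SELECT r FROM RecordingMetaData AS r " +
    "ORDER BY r.startDate DESC, r.startTime DESC, r.channelNumber DESC",
    RecordingMetaData.class);
q.setMaxResults(50);                                // limit the result to 50 records
List<RecordingMetaData> newest = q.getResultList(); // the 50 youngest records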
DB configuration
We use the following configuration in objectdb.conf:
<objectdb>
  <general>
    <temp path="$temp" threshold="64mb" />
    <network inactivity-timeout="0" />
    <url-history size="50" user="true" password="true" />
    <log path="$objectdb/log/" max="8mb" stdout="false" stderr="false" />
    <log-archive path="$objectdb/log/archive/" retain="90" />
    <logger name="*" level="info" />
  </general>
  <database>
    <size initial="1024mb" resize="256mb" page="2kb" />
    <recovery enabled="true" sync="false" path="$objectdb/" max="512mb" />
    <recording enabled="false" sync="false" path="$objectdb/" mode="write" />
    <locking version-check="true" />
    <processing cache="256mb" max-threads="10" />
    <query-cache results="64mb" programs="50" />
    <extensions drop="temp,tmp" />
  </database>
  <entities>
    <enhancement agent="false" reflection="warning" />
    <cache ref="weak" level2="512mb" />
    <persist serialization="false" />
    <cascade-persist always="auto" on-persist="false" on-commit="true" />
    <dirty-tracking arrays="false" />
  </entities>
  <schema>
  </schema>
  <server>
    <connection port="6136" max="100" />
    <data path="$objectdb/db" />
    <!-- <replication url="objectdb://localhost/test.odb;user=admin;password=admin" /> -->
  </server>
  <users>
  </users>
  <ssl enabled="false">
  </ssl>
</objectdb>
How we test
The issue happens with the first query that we run after we start the application that uses the DB in embedded mode. So we basically start the application and let it stabilise for about 60 seconds; by this time a number of new records have already been added by the writer thread. Then we run the query above and time it until we get the result set of 50 records.
To run the test again, we stop the application and then repeat the above.
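For completeness, the timing itself is done roughly like this (a sketch, reusing the query q from above):

long start = System.nanoTime();
List<RecordingMetaData> result = q.getResultList(); // the first query after start-up
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println(result.size() + " records in " + elapsedMs + " ms");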
An explanation of why this is happening and how we can avoid it would be very much appreciated. Our next test points are 500,000 records, then 700,000 and then 900,000. At this stage we expect the application to be basically unusable when we hit the 500,000 mark unless we find a solution to this issue.
Thank you
Emil