Hi guys,

This morning I was facing some issues with the JDBM backend.
I tried to inject tens of thousands of entries (80,000) into the base, with numerous indexes. So far, so good: it took around 6 minutes to get them all injected. But when I checked the created files, I saw that there were no .db files, only journal files (.lg). Of course, after a restart those .db files get created, but this is certainly not how the backend should work. I checked the code and saw that we never flush the journals through a transactionManager.synchronizeLog() call, except for the master table. Too bad :/ But it explains why most of the files aren't flushed (especially the indexes). So I added this transactionManager.synchronizeLog() call for every index (a rough sketch is in the PS below).

With that in place, the .db files were now growing, but I faced another problem: an exception (ConcurrentModificationException) deep inside JDBM. After an hour or two trying to understand what was going on, I decided to disable the CacheRecordManager and use a plain BaseRecordManager instead. The performance is just awful (5 times slower), but that was 'expected'. Even worse, I still got concurrent modification exceptions :/

Then I realised that the SyncWorker was being called every 15 seconds and was calling the sync() method, which breaks when a put() is done at the same time on one of the indexes being flushed.

In the end, I decided to use the same lock that we use for write operations (the lock is part of the OperationManager implementation, so I exposed it so that it can be used outside of the OM). And it worked (second sketch in the PS). The only drawback is that while the journals are being flushed, the server is blocked for a period of time (depending on how many elements we have committed).

It's far from perfect, and I wish we had a better solution, but I don't think JDBM will do us any favors here...

--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
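PS: for the archives, here is roughly what the per-index flush looks like. This is only a minimal sketch: synchronizeLog() is the real JDBM call, but the getTransactionManager() accessor and the way you get hold of each index's BaseRecordManager are assumptions on my side, so adjust to whatever the actual code exposes.

    import java.io.IOException;

    import jdbm.recman.BaseRecordManager;

    // Flush one index's record manager so that its .db file gets
    // updated on disk, instead of everything sitting in the .lg journal.
    void flushIndex( BaseRecordManager recMan ) throws IOException
    {
        // commit() pushes the pending in-memory pages into the journal (.lg)...
        recMan.commit();

        // ...and synchronizeLog() replays the journal into the .db file.
        // getTransactionManager() is assumed to be exposed by our JDBM code.
        recMan.getTransactionManager().synchronizeLog();
    }

Calling this for every index after a commit is what made the .db files grow.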

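And here is the shape of the locking fix. Again a sketch, not the actual code: I'm assuming the exposed OperationManager lock is the write side of a ReentrantReadWriteLock, and the names are illustrative.

    import java.io.IOException;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    import jdbm.recman.BaseRecordManager;

    // The SyncWorker now takes the very same write lock that the
    // OperationManager uses for write operations, so sync() can never
    // interleave with a put() on one of the indexes being flushed.
    public class SyncWorker implements Runnable
    {
        private final ReentrantReadWriteLock.WriteLock writeLock;
        private final BaseRecordManager recMan;

        public SyncWorker( ReentrantReadWriteLock.WriteLock writeLock, BaseRecordManager recMan )
        {
            this.writeLock = writeLock;
            this.recMan = recMan;
        }

        public void run()
        {
            writeLock.lock(); // blocks incoming writes while we flush

            try
            {
                recMan.commit();
                recMan.getTransactionManager().synchronizeLog();
            }
            catch ( IOException ioe )
            {
                // log it; the next scheduled run will retry the flush
            }
            finally
            {
                writeLock.unlock(); // writes resume: this pause is the drawback I mentioned
            }
        }
    }

The worker is scheduled every 15 seconds (with a ScheduledExecutorService, for instance), and the length of the pause depends on how many elements were committed since the last flush.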