Here is a personal experience that may be relevant.

I once worked on an application that used a widely known commercial database (not Derby), and a script that someone else wrote took several hours to perform a series of deletions and insertions. After examining the script, I realized that there was a single commit at the end. By adding a few commits at appropriate places where the database would remain consistent for use by the application, we were able to achieve an order-of-magnitude increase in performance. Hope this helps.
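
For illustration, here is a minimal JDBC sketch of that idea against embedded Derby. The table LOG_ENTRY, the columns ID and ENTITY_ID, the database name, and the batch size are all hypothetical; you would tune the commit interval to your own data and consistency requirements:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedDelete {
        static final int BATCH = 10000;  // commit interval; tune for your workload

        public static void main(String[] args) throws SQLException {
            // Embedded Derby connection; database name is hypothetical.
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB")) {
                conn.setAutoCommit(false);  // take control of transaction boundaries

                // Collect the keys of the rows to delete (hypothetical table/columns).
                List<Long> ids = new ArrayList<Long>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT ID FROM LOG_ENTRY WHERE ENTITY_ID = 42")) {
                    while (rs.next()) {
                        ids.add(rs.getLong(1));
                    }
                }
                conn.commit();

                // Delete in bounded chunks, committing between chunks so each
                // transaction (and its lock footprint and log volume) stays small.
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE FROM LOG_ENTRY WHERE ID = ?")) {
                    int inBatch = 0;
                    for (Long id : ids) {
                        ps.setLong(1, id);
                        ps.addBatch();
                        if (++inBatch == BATCH) {
                            ps.executeBatch();
                            conn.commit();  // commit at a consistent point
                            inBatch = 0;
                        }
                    }
                    if (inBatch > 0) {
                        ps.executeBatch();
                        conn.commit();
                    }
                }
            }
        }
    }

The important point is not the chunking mechanics but where the commits go: each one must land at a point where the database is consistent for the application, as in the experience above.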

_________________________________________

John I. Moore, Jr.
SoftMoore Consulting

-----Original Message-----
From: Amitava Kundu1 [mailto:[email protected]] 
Sent: Thursday, January 30, 2014 1:46 AM
To: [email protected]
Subject: Issue with large delete in derby


Hi,
We are using embedded Derby 10.5.1.1 in our product. The Derby database is
used as a regular RDBMS in which many inserts, deletes, and selects happen.
There are business entities, each occurrence of which can be 10 GB or
larger, e.g. the data of a huge log file.
In our application we use cascade delete, and referential integrity
constraints are ON.

This application runs on 64-bit Linux with 8 GB of RAM allocated to the JVM.
Similar times are observed on our development Windows box.

It takes more than 3 hours to delete those entities. During this time all
the relevant tables stay locked and no other operation is feasible.

We'd like to know what options/strategies could be adopted for:
   Speeding up the delete process
   Performing other database activities in parallel


Thanks
        Amitava Kundu
