Peter, try this and see if it makes a difference:

1. Select the entries to be deleted and note their primary keys.
2. Issue the deletes by those keys (from step 1) in short transaction
batches (see the sketch below).
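
For step 2, here is a minimal JDBC sketch of what I mean. The DataSource,
the cleanup method name and the batch size of 500 are just illustrative
assumptions on my side; the table and column names are taken from your mail.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class BatchedCleanup {

    private static final int BATCH_SIZE = 500; // keys deleted per transaction

    public static void cleanup(DataSource ds, long cutoffMillis) throws SQLException {
        List<String> ids = new ArrayList<>();

        // Step 1: collect the primary keys of the rows to be deleted.
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT id FROM mytable WHERE created_at < ?")) {
            ps.setLong(1, cutoffMillis);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getString(1));
                }
            }
        }

        // Step 2: delete by key in short transactions, so locks are
        // held only briefly and concurrent INSERTs are blocked less.
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM mytable WHERE id = ?")) {
            c.setAutoCommit(false);
            int inBatch = 0;
            for (String id : ids) {
                ps.setString(1, id);
                ps.addBatch();
                if (++inBatch == BATCH_SIZE) {
                    ps.executeBatch();
                    c.commit();   // release locks after each small batch
                    inBatch = 0;
                }
            }
            if (inBatch > 0) {
                ps.executeBatch();
                c.commit();
            }
        }
    }
}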

On Sun, 6 Oct 2019, 01:33 Peter, <tableyourt...@gmail.com> wrote:

> Hi,
>
> I have a table "mytable" with columns "id", "created_at" and "json"
> (VARCHAR, BIGINT, LONG VARCHAR), where data comes in at roughly 200k new
> entries every hour, and I would like to keep only the entries of the
> last 1 or 2 hours. It is expected behaviour for the user that entries
> which are too old get lost, as the table is some kind of an LRU cache.
>
> The current solution is to delete entries older than 4 hours every 30
> minutes:
>
> DELETE FROM mytable WHERE created_at < ?
>
> I'm using this in a prepared statement where ? is the timestamp for
> "4 hours ago" in milliseconds (derived from new DateTime().getMillis()).
>
> This works, but some (not all) INSERT statements get a delay of the
> same order (2-5 seconds) as the time this DELETE takes, which is ugly.
> These INSERT statements are executed independently of the DELETE (from
> different threads).
>
> Is there a better way? Can I somehow avoid locking the unrelated INSERT
> operations?
>
> What helps a bit is running those deletes more frequently: the delays
> get smaller, but then the number of delayed requests increases.
>
> What also helps a bit (I have not seen a negative impact so far) is
> increasing the page size for the Derby Network Server:
> -Dderby.storage.pageSize=32768
>
> Regards
> Peter
>
>
