On 08/02/2021 18:43, Rick Hillegas wrote:
> The MergeInserter is doing a merge sort because there is no usable
> descending index on system_log.time. The storage layer's page cache (the
> ConcurrentCache) is filling up because you have to fault-in the entire
> contents of system_log. The logic in MergeInserter.insert() does not
> seem to be smart enough to realize that it is close to exhausting memory
> and needs to spill a merge run to disk.
>
> I would recommend throwing more memory at your JVM or adding a
> descending index to system_log.time.
Thanks for this.
I already have a descending index on time -- here is the full
declaration of system_log:
CREATE TABLE system_log (
    id       INTEGER GENERATED ALWAYS AS IDENTITY,
    time     TIMESTAMP DEFAULT NULL,
    username VARCHAR(15),
    name     VARCHAR(520),
    facility VARCHAR(15) NOT NULL,
    event    VARCHAR(31) NOT NULL,
    sector   VARCHAR(15),
    item     VARCHAR(255),
    details  VARCHAR(32000),
    CONSTRAINT systemlog_pk PRIMARY KEY (id)
);

CREATE INDEX log_index ON system_log (time DESC);
Is there a way to check if the index is being used?
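I did come across Derby's runtime statistics facility, which I think should show whether log_index is being chosen -- if I've understood the docs correctly, something along these lines from ij:

```sql
-- Enable runtime statistics collection for this connection
CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);

-- Run the query in question
SELECT * FROM system_log ORDER BY time DESC;

-- Dump the plan/statistics text for the last statement executed;
-- it should name the index (or table scan) that was used
VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();
```

But it would be good to know if that's the recommended way, or if there's something simpler.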
Alternatively, the id column has essentially the same ordering as time,
since insertions always use CURRENT_TIMESTAMP; I'm trying to work out a
way to special-case this query to order by id instead of time, to see
whether that makes a difference. But since I allow the data to be sorted
on any column for display, it isn't going to help much in general.
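What I have in mind for the special case is something like this (hypothetical query, just to illustrate):

```sql
-- Most recent entries first: since id and time increase together,
-- ordering by id (the primary key) should return the same rows in the
-- same order as ORDER BY time DESC, but can walk the primary-key index
SELECT id, time, username, event
FROM system_log
ORDER BY id DESC
FETCH FIRST 100 ROWS ONLY;
```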
Access to this particular table is slow in any case, which I had thought
might be due to lock contention (anything that changes the system state
gets logged, as well as certain read requests). I'm using the default
isolation level and autocommit for insertions. Ideas on ways to optimise
this sort of use case would be welcome!
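One thing I've been meaning to try is reading the log at a weaker isolation level, so that display queries don't block the concurrent autocommitted inserts -- assuming I have the Derby syntax right, something like:

```sql
-- Read without taking shared row locks; dirty reads should be
-- acceptable for a display-only view of the log
SET ISOLATION READ UNCOMMITTED;

SELECT * FROM system_log ORDER BY time DESC;
```

but I don't know whether locking is actually the bottleneck here.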
Meanwhile I'm bumping up the memory. The table currently holds about
200,000 rows, so I assume that as it grows, at some point this will just
start happening again.
--
John English