Marcel Reutegger wrote:
Christoph Kiehl wrote:
Ok. To get this working, you have to create at least one segment per
transaction, right?
Not necessarily. As an optimization, the current implementation uses the
redo.log to keep track of index modifications that were only made in
memory. This means that at the end of a transaction there won't
necessarily be a new index segment on disk.
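(Just to make the idea concrete, here is a minimal sketch of such a redo
log; the class and method names are made up for illustration and are not
Jackrabbit's actual API:)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.List;
    import java.util.function.BiConsumer;

    // Hypothetical sketch of the redo.log idea: uncommitted index changes
    // are appended to a log instead of being flushed as a new segment on
    // disk, and are replayed into the in-memory index after a restart.
    class RedoLog {
        private final Path file;

        RedoLog(Path file) {
            this.file = file;
        }

        // Record one modification ("add" or "delete") for a node id.
        void append(String op, String nodeId) throws IOException {
            Files.writeString(file, op + " " + nodeId + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Replay all logged modifications, e.g. into the in-memory index.
        void replay(BiConsumer<String, String> apply) throws IOException {
            if (!Files.exists(file)) {
                return;
            }
            List<String> lines = Files.readAllLines(file);
            for (String line : lines) {
                String[] parts = line.split(" ", 2);
                apply.accept(parts[0], parts[1]);
            }
        }

        // Once the in-memory changes have been persisted as a real segment,
        // the log can be discarded.
        void clear() throws IOException {
            Files.deleteIfExists(file);
        }
    }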
But isn't it necessary for the index data to be committed to the database/pm to
get a transactional index? I mean, if you commit the index changes from the
redo.log in a new transaction, you don't really gain anything compared to the
current solution regarding transactional index behavior, do you?
And index merging could be done in the background?
Index merging *is* already done in the background.
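(Roughly along these lines; an illustrative sketch only, not Jackrabbit's
actual merge policy or classes:)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Small index segments are merged into larger ones by a worker thread,
    // so commits and queries are not blocked by the (potentially slow) merge.
    class BackgroundMerger {
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        // Called whenever a new segment is written; the merge runs off the
        // commit path.
        void segmentAdded(Runnable mergeTask) {
            worker.submit(mergeTask);
        }

        void shutdown() {
            worker.shutdown();
        }
    }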
Yes, of course.
Sounds really interesting. But if the blob values are cached locally,
they first have to be downloaded on startup before the index becomes
fast.
Correct.
Hm, in our case this would mean downloading about 10GB on each restart :( Might
take a while ;)
Cheers,
Christoph