> As Noel suggests, you could split the table in two: one contains only the
> columns that are used a lot, and the other table holds the remaining columns.

Yes, I've thought about that solution. I even considered storing the
data as one-column-per-table, but I'm not sure about the performance
of left joins across 20-30 tables.
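For concreteness, the split Noel suggests could look like this (table
and column names here are hypothetical, just to illustrate the idea):

```sql
-- "Hot" table: only the frequently used columns.
CREATE TABLE item_hot (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255),
    status INT
);

-- "Cold" table: the rarely read columns, keyed by the same id.
CREATE TABLE item_cold (
    id BIGINT PRIMARY KEY REFERENCES item_hot(id),
    description CLOB,
    metadata CLOB
);

-- Queries that only need the hot columns avoid reading the wide rows:
SELECT id, name FROM item_hot WHERE status = 1;

-- The full row is reassembled with a join only when actually needed:
SELECT h.id, h.name, c.description
FROM item_hot h
LEFT JOIN item_cold c ON c.id = h.id;
```

With one-column-per-table, that second query becomes a 20-30-way join,
which is where my performance doubt comes from.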

And my 2 cents on lazy field initialization. I'm thinking about a
simpler solution: on our data set, almost all of the reading time is
spent converting to strings (ValueString.get(readString())). What
about implementing a LazyValue that is initialized with a pointer to
the page (which should be possible if pages are immutable, of course)
and a start position? The method `org.h2.result.Row.getValue` could
initialize it just in time and replace the lazy value with a real one.
I think this could benefit not only indexing of large data volumes,
but also updates where most fields of a row are unchanged, since the
H2 engine could reuse their binary representation without
instantiating the real objects.
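A minimal sketch of what I mean (class and method names are my own,
not H2 internals): the value keeps a reference to an immutable page
plus a start position, and decodes the real String only on first
access; an update of an unchanged field could copy the raw bytes
without decoding at all.

```java
import java.nio.charset.StandardCharsets;

public class LazyValueSketch {

    static final class LazyValue {
        private final byte[] page;   // immutable page data
        private final int start;     // start position within the page
        private final int length;    // encoded length in bytes
        private String decoded;      // filled in just in time

        LazyValue(byte[] page, int start, int length) {
            this.page = page;
            this.start = start;
            this.length = length;
        }

        // Decode on first access; later calls reuse the cached result.
        String get() {
            if (decoded == null) {
                decoded = new String(page, start, length, StandardCharsets.UTF_8);
            }
            return decoded;
        }

        // An update that leaves this field unchanged could copy these
        // bytes verbatim without ever calling get().
        byte[] rawBytes() {
            byte[] copy = new byte[length];
            System.arraycopy(page, start, copy, 0, length);
            return copy;
        }
    }

    public static void main(String[] args) {
        byte[] page = "hello world".getBytes(StandardCharsets.UTF_8);
        LazyValue v = new LazyValue(page, 6, 5);
        System.out.println(v.get()); // prints "world"
    }
}
```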

Also, as I mentioned, those batch inserts are implemented as a lot of
single insert statements. Is it possible to leverage the batch, for
example by writing one log record for the whole batch at once?
Also, as far as I recall, databases that provide something like
`WRITE_DELAY` also have a parameter for the maximum size of the
unflushed log. I believe it's one page in H2 now. I think writing
several pages at once is faster than writing them one by one.
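To illustrate the point (this is not H2 code; the page size and the
counting stream are hypothetical stand-ins for the log): grouping
several pages into one write call sharply reduces the number of calls
that reach the underlying storage.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

public class BatchedLogWrites {

    // Wraps a buffer and counts how many write calls reach it,
    // standing in for system calls to the log file.
    static final class CountingStream extends OutputStream {
        final ByteArrayOutputStream target = new ByteArrayOutputStream();
        int writeCalls;

        @Override public void write(int b) {
            target.write(b);
            writeCalls++;
        }

        @Override public void write(byte[] b, int off, int len) {
            target.write(b, off, len);
            writeCalls++;
        }
    }

    static final int PAGE_SIZE = 4096;

    // One write call per page: what many single inserts amount to.
    static int writeOneByOne(int pages) {
        CountingStream out = new CountingStream();
        byte[] page = new byte[PAGE_SIZE];
        for (int i = 0; i < pages; i++) {
            out.write(page, 0, PAGE_SIZE);
        }
        return out.writeCalls;
    }

    // Accumulate up to 'batch' pages, then write them in one call.
    static int writeBatched(int pages, int batch) {
        CountingStream out = new CountingStream();
        byte[] buffer = new byte[PAGE_SIZE * batch];
        int buffered = 0;
        for (int i = 0; i < pages; i++) {
            buffered++;
            if (buffered == batch) {
                out.write(buffer, 0, PAGE_SIZE * buffered);
                buffered = 0;
            }
        }
        if (buffered > 0) {
            out.write(buffer, 0, PAGE_SIZE * buffered);
        }
        return out.writeCalls;
    }

    public static void main(String[] args) {
        System.out.println(writeOneByOne(100));   // 100 write calls
        System.out.println(writeBatched(100, 8)); // 13 write calls
    }
}
```

The same data is written either way; only the number of calls (and
hence syscall/flush overhead) differs.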
Also, I wonder why the temporary files for large selects (>
MAX_MEMORY_ROWS) and for undo records are always created in the
database directory. Would it make sense to place such files on fast
storage (like tmpfs)?
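In plain Java it is easy to direct temp files elsewhere; a generic
sketch (this is not H2's actual mechanism, and the `/dev/shm` path and
file prefix are assumptions for illustration):

```java
import java.io.File;
import java.io.IOException;

public class TempOnFastStorage {

    // Create a temp file in 'dir' if it exists, otherwise fall back
    // to the JVM default temp directory (java.io.tmpdir).
    static File createTemp(String dir) throws IOException {
        File d = new File(dir);
        if (d.isDirectory()) {
            return File.createTempFile("h2.sort.", ".tmp", d);
        }
        return File.createTempFile("h2.sort.", ".tmp");
    }

    public static void main(String[] args) throws IOException {
        // On Linux, /dev/shm is typically a tmpfs mount.
        File f = createTemp("/dev/shm");
        System.out.println(f.getAbsolutePath());
        f.deleteOnExit();
    }
}
```

A configurable temp directory like this would let users of spinning
disks keep the sort/undo spill files on RAM-backed storage.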

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.