Hi,

About concurrency: it is possible to extend the MVMap to support fully
concurrent operations. For example, the R-tree implementation is an
extension of the MVMap. The map implementation used is pluggable. According
to my tests, for in-memory operations, the current MVMap implementation is
about as fast as a TreeMap, which is not synchronized and does not allow
concurrent writes. I guess a concurrent MVMap will be slower and more
complex, but let's see. There are multiple ways to support fully concurrent
operations, for example by synchronizing on each page, or by using a
read-only root node. However, I wonder if it's really such a common use
case to update a
map concurrently, or concurrently write to and read from the same version
of a map. What exactly is your use case? How do you make sure the threads
don't overwrite each other's changes (for concurrent writes)? Don't you need
some kind of isolation? For concurrent writes to and reads from the head,
don't you think it's a problem that reading will not be isolated?
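
To illustrate the "read-only root node" option: here is a minimal sketch
(this is not the MVMap code; the SnapshotMap class and its naive
copy-on-write strategy are my assumptions) where readers always see an
immutable snapshot and writers publish a new root atomically:

```java
import java.util.Collections;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, not the actual MVMap: readers see a consistent,
// immutable snapshot; writers copy-on-write and publish the new root
// with a compare-and-set.
class SnapshotMap<K extends Comparable<K>, V> {
    private final AtomicReference<SortedMap<K, V>> root =
            new AtomicReference<>(
                    Collections.unmodifiableSortedMap(new TreeMap<>()));

    // Lock-free read: the snapshot never changes, so reads are isolated.
    V get(K key) {
        return root.get().get(key);
    }

    // Copy-on-write update, retried on contention.
    void put(K key, V value) {
        while (true) {
            SortedMap<K, V> old = root.get();
            TreeMap<K, V> copy = new TreeMap<>(old);
            copy.put(key, value);
            if (root.compareAndSet(old,
                    Collections.unmodifiableSortedMap(copy))) {
                return;
            }
        }
    }

    // A reader can hold on to a snapshot (a "version") indefinitely.
    SortedMap<K, V> snapshot() {
        return root.get();
    }
}
```

Readers never block and never see a half-applied write; the cost is
copying on each update, which a real B-tree implementation would limit to
the path from the root to the changed leaf.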

NavigableMap: currently it's a java.util.Map, but yes, all features of a
NavigableMap will be supported later on. Maybe this could be done in the
form of a wrapper or abstract base class, to keep the core engine small
(similar to java.util.AbstractMap).
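
To sketch the wrapper / abstract base class idea (the class and method
names here are hypothetical, not the planned API): the core engine only
needs to supply an ordered key iterator, and the navigable operations can
be derived on top of that primitive:

```java
import java.util.Iterator;

// Hypothetical sketch in the spirit of java.util.AbstractMap: the core
// engine provides one minimal primitive (sorted key iteration), and
// NavigableMap-style operations are derived from it.
abstract class NavigableSupport<K extends Comparable<K>> {
    // Minimal primitive the core engine must supply: keys in sorted order.
    abstract Iterator<K> keyIterator();

    // Largest key <= the given key, or null if there is none.
    // Derived here in O(n); a real implementation would descend the
    // B-tree directly instead of scanning.
    K floorKey(K key) {
        K result = null;
        for (Iterator<K> it = keyIterator(); it.hasNext(); ) {
            K k = it.next();
            if (k.compareTo(key) <= 0) {
                result = k;
            } else {
                break;
            }
        }
        return result;
    }
}
```

The point of the pattern is that the storage engine itself stays small,
and the navigable convenience methods live in a reusable layer on top.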

SnapTree: I didn't know about this, I will have a look at it.

> You say that page size will be variable, will not it be harder to recover
corrupted databases because of this?

No. The file format is:

[fileHeader] [fileHeader] [chunk]*

The file header is stored (at least) twice for security. The size of the
file header is the native file system block size, or a multiple of it. It
contains the pointer to the latest head (actually it will be the list of
latest heads). A chunk is:

[chunkHeader] [page]*

Chunks are also aligned to the file system block size, so a chunk occupies
at least one block (but should be around 2 MB or so if you care about write
performance). Each chunk header points to the root page of the meta map
within this chunk. The meta map contains the position of the root pages of
all other maps (beside other metadata). Pages don't need to be aligned,
because they are not overwritten. There are no in-place updates like in the
page store and in most databases. At startup, the recovery algorithm just
has to check which chunk is the latest valid one. The recovery tool
(currently called Dump) also reads chunks, so that's not a problem either.
By the way, each page has a small page header and a checksum, so data can
be recovered even if the chunk header and some of the pages are corrupt.
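
As a rough illustration of the block alignment and the "latest valid
chunk" scan described above (the block size, the Chunk class, and all
field names are assumptions for the sketch, not the actual file format
code or the Dump tool):

```java
import java.util.List;

// Hypothetical sketch of the layout and startup recovery described in
// the text; sizes and names are illustrative only.
class Recovery {
    static final int BLOCK_SIZE = 4096;               // assumed file system block size
    static final long FIRST_CHUNK = 2L * BLOCK_SIZE;  // after the two file header copies

    // Chunks (but not pages) are block aligned: round a length up
    // to the next block boundary.
    static long alignToBlock(long length) {
        return (length + BLOCK_SIZE - 1) / BLOCK_SIZE * BLOCK_SIZE;
    }

    static class Chunk {
        final long version;      // chunks are appended, so higher = newer
        final boolean checksumOk;
        Chunk(long version, boolean checksumOk) {
            this.version = version;
            this.checksumOk = checksumOk;
        }
    }

    // Startup recovery: scan the chunks and keep the newest one whose
    // checksum verifies; corrupt chunks are simply skipped.
    static Chunk latestValid(List<Chunk> chunks) {
        Chunk best = null;
        for (Chunk c : chunks) {
            if (c.checksumOk && (best == null || c.version > best.version)) {
                best = c;
            }
        }
        return best;
    }
}
```

Because nothing is updated in place, a torn write can only affect the
chunk that was being appended, and the scan falls back to the newest
chunk that still verifies.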

Regards,
Thomas

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.