I'm curious to know why this might be, e.g., would this be true on
a ReiserFS Linux machine? Is this limitation based on the OS or on
the Priha implementation? If the latter, why? (I'd like to avoid
the same issues if I were to build a Berkeley DB backend).

This is actually a discussion better suited to the priha-dev mailing list :-).

Anyhoo, the reason it does not currently scale is that UUID lookups are O(N) (or might be O(N^2), I haven't checked) because of a simplistic implementation. This is not a problem for the current JDBC implementation, because it stores UUIDs with the nodes themselves, so a node can be fetched by UUID with a simple JOIN; with an index that's essentially an O(log N) operation.
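
To illustrate the difference, here's a quick sketch (the Node interface and the method names here are made up for illustration, not Priha's actual API):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical stand-in for whatever the provider actually stores.
    interface Node { String getUUID(); }

    class UuidLookup
    {
        // O(N): the simplistic way - scan every node until the UUID matches.
        static Node findLinear( List<Node> allNodes, String uuid )
        {
            for( Node n : allNodes )
            {
                if( uuid.equals( n.getUUID() ) ) return n;
            }
            return null;
        }

        // O(1) expected (or O(log N) with a TreeMap): maintain an index
        // that is updated whenever a node is stored or removed.
        private final Map<String,Node> m_index = new HashMap<String,Node>();

        void nodeStored( Node n )  { m_index.put( n.getUUID(), n ); }
        void nodeRemoved( Node n ) { m_index.remove( n.getUUID() ); }

        Node findIndexed( String uuid ) { return m_index.get( uuid ); }
    }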

The FileProvider impl is fixable once I (or someone else) start thinking about it; one possible approach is sketched below. Optimizing something like this is going to be pretty cool ;-). My current stats show that 99% of the time is spent in the storage layer, though the tests are quite simplistic. But working on this is quite a different beast from JSPWiki, where you need to understand quite a lot of code to help - for Priha, you can start working on really small bits and have a huge impact on performance.
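
For instance (purely a sketch of one approach, not a description of the current code - the class and file name are hypothetical), a file-based provider could keep a persistent uuid -> path index next to the repository:

    import java.io.*;
    import java.util.Properties;

    // Sketch: a uuid -> path mapping stored in a properties file,
    // loaded once at startup and rewritten on every change.
    class FileUuidIndex
    {
        private final File       m_indexFile;
        private final Properties m_uuidToPath = new Properties();

        FileUuidIndex( File repoDir ) throws IOException
        {
            // "uuids.properties" is a made-up file name for this example.
            m_indexFile = new File( repoDir, "uuids.properties" );
            if( m_indexFile.exists() )
            {
                InputStream in = new FileInputStream( m_indexFile );
                try { m_uuidToPath.load( in ); } finally { in.close(); }
            }
        }

        String getPath( String uuid ) { return m_uuidToPath.getProperty( uuid ); }

        void put( String uuid, String path ) throws IOException
        {
            m_uuidToPath.setProperty( uuid, path );
            flush();
        }

        void remove( String uuid ) throws IOException
        {
            m_uuidToPath.remove( uuid );
            flush();
        }

        private void flush() throws IOException
        {
            OutputStream out = new FileOutputStream( m_indexFile );
            try { m_uuidToPath.store( out, "uuid -> path index" ); } finally { out.close(); }
        }
    }

Loading the whole index into memory at startup should be fine for repositories of reasonable size; anything bigger probably wants a real on-disk structure like Berkeley DB, which is what the original question was about anyway.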

/Janne
