Congratulations on the new site!
Thanks!
a. documentation on the outer limits of Priha's abilities,
particularly throughput performance on extremely large
numbers of very small XML files (potentially on the scale of
100M records with high-speed indexing via XML ID).
Would be interesting to see how fast it becomes... At the moment the
speed isn't that bad, and in theory reading should be O(1) [though
creation on the FileProvider is O(N) or worse]. But it depends a lot
on the DB you have underneath. There's currently an HSQLDB provider,
but tweaking it to run on MySQL should be pretty easy too.
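Roughly, the O(1) read I mean is the standard UUID lookup in
JSR-170. A minimal sketch (how you obtain the Repository instance
from Priha is setup-specific, so "repo" is assumed here; the node
also has to be mix:referenceable for UUID lookup to work):

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    public class UuidLookup {
        // Fetch a node directly by UUID - the constant-time read path,
        // as opposed to walking down the path hierarchy.
        public static String pathOf(Repository repo, String uuid)
                throws RepositoryException {
            Session session = repo.login();
            try {
                Node node = session.getNodeByUUID(uuid);
                return node.getPath();
            } finally {
                session.logout();
            }
        }
    }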
b. development of a JSR-170 sub-interface so that we have
a sense of what Priha implements.
Mmm... You could just run the JCR TCK and get it from there.
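Besides the TCK, the stock JSR-170 descriptor keys give a quick
first approximation of what a repository claims to implement. A
minimal sketch (pass in whatever Repository instance Priha hands
you):

    import javax.jcr.Repository;

    public class FeatureCheck {
        // Prints which optional JSR-170 feature blocks a repository
        // claims to support, using the standard descriptor keys.
        public static void print(Repository repo) {
            String[] keys = {
                Repository.LEVEL_1_SUPPORTED,
                Repository.LEVEL_2_SUPPORTED,
                Repository.OPTION_VERSIONING_SUPPORTED,
                Repository.OPTION_LOCKING_SUPPORTED,
                Repository.OPTION_OBSERVATION_SUPPORTED,
                Repository.OPTION_QUERY_SQL_SUPPORTED
            };
            for (String key : keys) {
                System.out.println(key + " = " + repo.getDescriptor(key));
            }
        }
    }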
This is in addition to my ongoing interest in Priha as a JSPWiki
backend, where I'm still hoping to see support for pluggable
metadata, since installations often have their own metadata
requirements beyond the rudimentary stuff required by the
wiki itself - for example, the need to integrate into existing
enterprise architectures.
The way I've currently written it, a plugin has full access to the
entire JCR metadata - we just provide accessors for the most
commonly used metadata (like the content of the page as a String,
the author of the page, and the version of the page).
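So a plugin could, for example, walk every property on a page's
backing Node to pick up installation-specific metadata. A rough
sketch in plain JSR-170 (how the plugin gets hold of the Node is up
to the JSPWiki integration, so that part is assumed):

    import javax.jcr.Node;
    import javax.jcr.Property;
    import javax.jcr.PropertyIterator;
    import javax.jcr.RepositoryException;

    public class MetadataDump {
        // A plugin isn't limited to the convenience accessors: it can
        // walk every property on the page's backing Node.
        public static void dump(Node pageNode) throws RepositoryException {
            PropertyIterator props = pageNode.getProperties();
            while (props.hasNext()) {
                Property p = props.nextProperty();
                // getString() only works on single-valued properties.
                if (!p.getDefinition().isMultiple()) {
                    System.out.println(p.getName() + " = " + p.getString());
                }
            }
        }
    }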
I would also take a look at Jackrabbit. It has some known
scalability limits (if you put too many children under a single
Node), but at least it's well-tested (which Priha isn't right now).
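If you do hit that limit, the usual workaround is to bucket children
under intermediate nodes instead of one flat list. A sketch (the
two-character prefix and the "bucket" layout are just for
illustration, not anything Jackrabbit requires):

    import javax.jcr.Node;
    import javax.jcr.RepositoryException;

    public class Buckets {
        // Spreads children over intermediate "bucket" nodes keyed by a
        // short prefix of the id, so no single node ends up with
        // millions of direct children. Caller saves the session.
        public static Node store(Node root, String id)
                throws RepositoryException {
            String bucket = id.substring(0, 2); // assumes id length >= 2
            Node dir = root.hasNode(bucket)
                    ? root.getNode(bucket)
                    : root.addNode(bucket);
            return dir.hasNode(id) ? dir.getNode(id) : dir.addNode(id);
        }
    }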
/Janne