Hi Edgar,
I was away for three weeks on army duty, so apologies for the late reply.
The main problem with storing the item states in a complex schema is Collection handling. Since changes to Collection fields are not logged into add/update/remove-aware objects, every element in the Collection must be stored on each write call. This hurts performance when handling collections with many elements, even with the simple PMs included in the core.
Actually there is a way around this, which is why I had custom implementations of NodeState and PropertyState: they let me use add/update/remove-aware objects. Hibernate and OJB each do this differently, but if it is implemented correctly you do not have to rewrite the whole collection every time.
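To illustrate the idea (this is a minimal hypothetical sketch, not the actual Jackrabbit, Hibernate or OJB implementation; all names here are made up), an add/remove-aware wrapper records deltas so the persistence layer can write only what changed instead of rewriting the whole collection:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical change-tracking list: the persistence manager reads only
// the recorded adds/removes and then clears them after a successful write.
public class ChangeAwareList<E> {
    private final List<E> elements = new ArrayList<>();
    private final List<E> added = new ArrayList<>();
    private final List<E> removed = new ArrayList<>();

    public void add(E e) {
        elements.add(e);
        added.add(e);
    }

    public boolean remove(E e) {
        boolean wasPresent = elements.remove(e);
        if (wasPresent) {
            // An element added and removed in the same session cancels out
            // and never needs to touch the store at all.
            if (!added.remove(e)) {
                removed.add(e);
            }
        }
        return wasPresent;
    }

    public List<E> getAdded()   { return added; }
    public List<E> getRemoved() { return removed; }

    // Called by the persistence layer after it has written the deltas.
    public void clearDeltas() {
        added.clear();
        removed.clear();
    }
}
```

With something like this, a write call only has to persist `getAdded()` and `getRemoved()` rather than every element, which is the whole point of the add/update/remove-aware approach.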
But I ran into trouble because I had to copy the data between the original item state and my internal objects, which is why the implementation is so complex. If I could have re-used the objects as-is I would not have had this problem, but that was not possible because I needed to modify the collection implementations. Maybe there is a way to do this using aspects, but that would complicate things even further.
In hindsight, there is no perfect solution that is high-performance, transaction-aware and cluster-compliant all at once. I don't really like the file-system BLOB solution because it causes problems with replication. The RMI-cluster solution is interesting, but I worry about connection/disconnection problems. The full database implementation causes performance problems, especially for binary data. Basically, what this means is that we are implementing some sort of clustered file system that supports transactions and is as fast as possible.
Regards, Serge Huber.
