Hi,
I did not clear the tables before working, but the repository was quite small before. When I did a second try, the repository was empty; I stopped after 2500 nodes were created, and my default_node table was already 280 MB...
So far I couldn't reproduce the problem. The size (280 MB) could have another reason; maybe the database does not re-use empty space for some reason. When I had a similar problem, I also had a really large database (about 1 GB), but after compacting it, it was only 10 MB or so (that was not MySQL, however).

My suggestion is:
- Before running the test, clean the database.
- After the test, display the number of rows in the database and the size (a small sketch of how to check this for MySQL is below).

Without a reproducible test case it is hard to find the problem. If the size reproducibly grows much faster than the number of rows, I would be interested in finding out why, or in finding a workaround.
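To make the numbers easy to compare between runs, something like the following could report the row count and allocated size of the node table after each test. This is only a sketch: it assumes MySQL, the table name "default_node", and placeholder JDBC URL and credentials.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TableSizeCheck {
    public static void main(String[] args) throws Exception {
        // Not needed with a JDBC 4 driver, but harmless with older ones
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/jackrabbit", "user", "password");
        Statement stat = con.createStatement();
        // information_schema reports the (approximate) row count and the
        // allocated data + index size of the table
        ResultSet rs = stat.executeQuery(
                "SELECT table_rows, "
                + "ROUND((data_length + index_length) / 1024 / 1024) AS size_mb "
                + "FROM information_schema.tables "
                + "WHERE table_name = 'default_node'");
        while (rs.next()) {
            System.out.println("rows: " + rs.getLong("table_rows")
                    + ", size: " + rs.getLong("size_mb") + " MB");
        }
        rs.close();
        stat.close();
        con.close();
    }
}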
Same-name siblings are something we like.
You don't really need to filter on the node name. You could filter on the node type, and additionally on the path. I am not an expert on JCR queries, but here is what I came up with. You need to replace nt:base with the node type you use:

Query q = manager.createQuery(
    "//contractors/element(*, nt:base)[@id=" + id + "]", Query.XPATH);

Query q = manager.createQuery(
    "SELECT * FROM nt:base WHERE jcr:path LIKE '/contractors/%' AND id=" + id,
    Query.SQL);
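For completeness, here is a minimal sketch of how the XPath variant could be executed and its results iterated. It assumes a logged-in javax.jcr.Session called "session" and a numeric id; those names, and nt:base as the node type, are just placeholders from the example above.

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class ContractorLookup {

    // Prints the path of every node below /contractors whose id property
    // matches, regardless of the node name (so same-name siblings are fine).
    public static void printMatches(Session session, long id)
            throws RepositoryException {
        QueryManager manager = session.getWorkspace().getQueryManager();
        Query q = manager.createQuery(
                "//contractors/element(*, nt:base)[@id=" + id + "]",
                Query.XPATH);
        QueryResult result = q.execute();
        for (NodeIterator it = result.getNodes(); it.hasNext();) {
            Node node = it.nextNode();
            System.out.println(node.getPath());
        }
    }
}

Note that if id were a string instead of a number, it would have to be quoted (and escaped) inside the query statement.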
Creating a deeper hierarchy is a possibility, but it would increase the complexity of the persistence service.
I was told that a few thousand child nodes is not a problem, but if you expect 30,000 or more, then you should consider using a deeper hierarchy (with the current Jackrabbit), because there is a performance degradation. One way to do that is sketched below.

Thomas
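This is only an illustration of the idea, not a Jackrabbit API: child nodes are bucketed by the last two digits of their id, so no single parent ends up with tens of thousands of children. The node type nt:unstructured, the "id" property, and the naming scheme are my own placeholders.

import javax.jcr.Node;
import javax.jcr.RepositoryException;

public class DeepHierarchy {

    // Adds a contractor node two levels deep, e.g. id 31415 is stored under
    // /contractors/15/contractor_31415 instead of directly under /contractors.
    public static Node addContractor(Node contractors, long id)
            throws RepositoryException {
        String bucket = String.format("%02d", id % 100);
        Node parent = contractors.hasNode(bucket)
                ? contractors.getNode(bucket)
                : contractors.addNode(bucket, "nt:unstructured");
        Node child = parent.addNode("contractor_" + id, "nt:unstructured");
        child.setProperty("id", id);
        // the caller still needs to call session.save() to persist the changes
        return child;
    }
}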
