Two thoughts:

1) It's a known issue (severe weakness) in the design of Jackrabbit/Oak
that it performs very poorly when large numbers of child nodes sit under
the same parent node. Many users have struggled with this, and imo it has
been one of the massive flaws that has kept JCR from really taking off. I
mean, probably still only 1% of developers have ever heard of JCR.

2) About cleaning up the massive child list: be sure you aren't doing a
commit (save) after each node. Try running a commit only after every 100
to 500 deletes.
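To make that concrete, here is a minimal sketch of the batching pattern. The loop and counters are runnable as-is; the actual JCR calls (session.getNode(), node.remove(), session.save()) are shown in comments since they need a live repository, and the batch size of 200 and the path names are just illustrative, not anything from Oak itself:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDelete {
    // Illustrative batch size; anywhere in the 100-500 range should help.
    static final int BATCH_SIZE = 200;

    // Deletes the given paths, committing once per batch instead of once
    // per node. Returns how many commits (saves) were issued.
    static int batchedDelete(List<String> paths) {
        int saves = 0;
        int pending = 0;
        for (String path : paths) {
            // In real JCR code:
            //   Node node = session.getNode(path);
            //   node.remove();
            pending++;
            if (pending >= BATCH_SIZE) {
                // session.save();  // commit this batch in one round trip
                saves++;
                pending = 0;
            }
        }
        if (pending > 0) {
            // session.save();  // commit the final partial batch
            saves++;
        }
        return saves;
    }

    public static void main(String[] args) {
        List<String> paths = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            paths.add("/big/node-" + i);  // hypothetical paths
        }
        // 1000 deletes at 200 per batch -> 5 saves instead of 1000
        System.out.println(batchedDelete(paths));
    }
}
```

The point of the pattern is that each save() is a full commit round trip to the backing store (Mongo, in Oak's DocumentNodeStore case), so amortizing it over a few hundred removals cuts the per-node overhead dramatically.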

Good luck. That scalability issue is a pretty big problem. I sure wish
Adobe would find some people with the requisite skill to get it fixed.
Every serious user runs into this problem. I mean, the Derby DB is
literally 100x more powerful, and most people consider Derby a
lightweight database.

Best regards,
Clay Ferguson

On Sun, Aug 6, 2017 at 7:38 PM, Peter Harrison <> wrote:

> Over the last few days I've come across a problem while trying to recover
> from a runaway script that created tens of thousands of nodes under a
> single node.
> When I get the parent node of this large number of new nodes and call
> hasNodes(), things lock up and the Mongo query times out. There is a
> similar problem when I try to call getNodes() to return a NodeIterator.
> I thought one of the key points of Oak was meant to be the ability to
> handle a large number of child nodes.
> The second problem I have is in removing these nodes. While I was able to
> find the node paths without the above calls, and so could get each node by
> path, when I call node.remove() it takes about 20-30 seconds to delete each
> node. I wanted to remove about 300,000 nodes, but at 20 seconds a node....
> that's about 69 days. It took no more than 2 days to add them, probably
> much less.
> While I'm working on ways around these problems - essentially by rebuilding
> the repo - it would be good to see if these problems are known or whether
> there is something I'm doing wrong.
