Hi,

Could you send the configuration (the repository.xml file) and, if possible,
the code (so I don't have to write it again)? I thought I saw a similar
problem just recently, but I am not sure whether it's related.

Thanks,
Thomas


On 6/20/07, Frédéric Esnault <[EMAIL PROTECTED]> wrote:
Hello there!



It seems to me that there is a storage problem when you create a lot of nodes
one by one, using this algorithm (see the sketch after the list):

1.      for each node to create

        a.      create node
        b.      fill node properties/child nodes
        c.      save session

2.      end for
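
A minimal sketch of this save-per-node pattern, using the plain JCR API; the
parent path ("content"), node names, and property names are just placeholders
for illustration:

    import javax.jcr.Node;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    public class PerNodeSave {
        // Creates "count" nodes under /content, saving the session after each one.
        public static void createNodes(Session session, int count) throws RepositoryException {
            Node parent = session.getRootNode().getNode("content"); // assumed parent node
            for (int i = 0; i < count; i++) {
                Node node = parent.addNode("item" + i, "nt:unstructured"); // a. create node
                node.setProperty("title", "Item " + i);                    // b. fill properties
                session.save();                                            // c. save after each node
            }
        }
    }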



The number of rows (and the size) of the default_node and default_prop tables
grows very fast, to an unacceptable degree.

I ended up with a 35 million row default_node table after inserting 27 000
nodes into a repository this way.



Then I used the other algorithm (again, a sketch follows the list):

1.      for each node to create

        a.      create node
        b.      fill node properties/child nodes

2.      end for
3.      save session
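
The same sketch, under the same assumptions, with a single save after the
whole batch instead of one save per node:

    import javax.jcr.Node;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    public class BatchSave {
        // Creates "count" nodes under /content, saving the session once at the end.
        public static void createNodes(Session session, int count) throws RepositoryException {
            Node parent = session.getRootNode().getNode("content"); // assumed parent node
            for (int i = 0; i < count; i++) {
                Node node = parent.addNode("item" + i, "nt:unstructured"); // a. create node
                node.setProperty("title", "Item " + i);                    // b. fill properties
            }
            session.save(); // 3. save the session once, after the loop
        }
    }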



This gives a much better result: the repository currently holds about 36 000
nodes, and the tables look correct (about 60 000 rows in the node table and
576 000 rows in the properties table).



The problem is that in a production environment users will create their nodes
one by one, day after day, never in large batches.

So, is there a storage problem?



Frederic Esnault

