Johan Stuyts wrote:
It does only 1 PROPFIND (on the folder) after each added document. The result is then parsed, looping through all the documents' index properties to determine the highest number. Since one document is added at a time, that would be a linear time increase rather than exponential, wouldn't it?

No, it is not linear but quadratic:
- noid (number of inserted documents)
- aerip (average entries returned per PROPFIND) = noid / 2
- total time needed = noid * aerip = noid^2 / 2
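The point can be made concrete with a toy cost model (no real WebDAV calls; the function names are mine): after inserting document i, a PROPFIND on the folder returns i entries that all have to be parsed, so the parsing work sums to 1 + 2 + ... + noid.

```python
def per_document_propfind_cost(noid):
    """Total entries parsed when a PROPFIND runs after every insert."""
    # After insert i the folder holds i documents, all returned and parsed.
    return sum(i for i in range(1, noid + 1))  # = noid * (noid + 1) / 2

def bulk_propfind_cost(noid):
    """Total entries parsed with a single PROPFIND after the whole batch."""
    return noid

print(per_document_propfind_cost(1000))  # 500500, roughly noid^2 / 2
print(bulk_propfind_cost(1000))          # 1000, linear in noid
```

The average response size is indeed about noid / 2 entries, which is where the quadratic total comes from.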

Ok, ok... I shouldn't speak about matters I don't know enough about...  :-)

But my point was that it was not as bad as Jasha suggested (multiple PROPFINDs per document).

I believe this classifies as O(n^2), and what you really want is O(n).
With the bulk operation you propose you need multiple operations per
document (initial store and update of the index afterwards), but this
still classifies as O(n).

You always need at least one PUT and one PROPPATCH per document; that's how WebDAV is designed. With the bulk idea you need only one PROPFIND for the whole batch afterwards.
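Counting requests makes the trade-off visible (again a sketch with hypothetical function names, not actual client code): both schemes are O(n) in request count, and the quadratic blow-up in the per-document scheme lives in the ever-growing PROPFIND responses, not in the number of requests.

```python
def requests_per_document_indexing(noid):
    """Requests when the index is looked up with a PROPFIND per insert."""
    # Per document: 1 PUT + 1 PROPPATCH + 1 PROPFIND on the folder.
    return noid * 3

def requests_bulk_indexing(noid):
    """Requests with the proposed bulk scheme."""
    # Per document: 1 PUT + 1 PROPPATCH; then 1 PROPFIND for the whole batch.
    return noid * 2 + 1

print(requests_per_document_indexing(100))  # 300
print(requests_bulk_indexing(100))          # 201
```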

Maybe it would be a good idea to have an extractor in the repository which always sets the correct index for any document that is added?

Niels

********************************************
Hippocms-dev: Hippo CMS development public mailinglist
