Hi ranjan,
Not sure if it will help in your case, but I wrote this kind of batching
code to process large result sets into CSV or HTML.
(: $cq0 is the cts:query for the search :)
let $incr := 5000
(: unfiltered estimate of the total number of matching fragments :)
let $size := xdmp:estimate(cts:search(doc(), $cq0, 'unfiltered'))
let $segs := xs:integer(ceiling($size div $incr))
return
  for $x in (1 to $segs)
  let $start := (($x - 1) * $incr) + 1
  let $end := $start + $incr - 1
  for $result in cts:search(doc(), $cq0, 'unfiltered')[$start to $end]
  return
    ...
It seems that only the current batch of $result nodes is kept in the expanded
tree cache, rather than the entire result set.
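The batching arithmetic above can be sketched outside XQuery; here is a small Python illustration (the function name and batch size are my own, for demonstration only) of how the 1-based (start, end) windows are derived from the estimated result size:

    import math

    def batch_windows(size, incr=5000):
        """Yield 1-based (start, end) position ranges covering `size` results.

        Mirrors the XQuery: segs = ceiling(size div incr),
        start = (x - 1) * incr + 1, end = start + incr - 1.
        The last window may extend past `size`; like the XQuery
        sequence predicate, the final batch simply comes up short.
        """
        segs = math.ceil(size / incr)
        for x in range(1, segs + 1):
            start = (x - 1) * incr + 1
            end = start + incr - 1
            yield start, end

    # For example, 12,000 results in batches of 5,000:
    print(list(batch_windows(12000)))
    # [(1, 5000), (5001, 10000), (10001, 15000)]
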
Gary
From: [email protected]
[mailto:[email protected]] On Behalf Of ranjan sarma
Sent: Monday, February 04, 2013 4:42 AM
To: [email protected]
Subject: [MarkLogic Dev General] regarding error: XDMP-EXPNTREECACHEFULL
Hi
I have a database containing over 20,000 documents, each around 2-10
kilobytes. I retrieve the documents with the following query:
let $uri := cts:uri-match('products/documents/*.xml')
let $doc := fn:doc($uri)
products/documents contains all the XML documents. We need to build a CSV file
from this result set, with each XML document becoming one row in the CSV file.
But since the combined size of the documents is too large, we receive the
error XDMP-EXPNTREECACHEFULL (I think the documents were being loaded into
memory all at once, which exceeded the expanded tree cache).
What other workarounds are there? Can we stream the results? If so, please
provide an example so that I can work from it.
Otherwise, can we convert the documents to CSV part by part and then write to
the HTTP output stream?
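The part-by-part idea can be sketched in Python for illustration (the element names sku and price are made up; substitute your real document structure): each XML document is parsed and written as one CSV row to an output stream, so only one document is held in memory at a time.

    import csv
    import io
    import xml.etree.ElementTree as ET

    def docs_to_csv(xml_docs, fields, out):
        """Write one CSV row per XML document, streaming to `out`
        rather than building the whole result set in memory.
        `fields` are child element names (hypothetical here)."""
        writer = csv.writer(out)
        writer.writerow(fields)
        for doc in xml_docs:
            root = ET.fromstring(doc)
            writer.writerow([root.findtext(f, default='') for f in fields])

    # Example with two small made-up documents:
    docs = ['<product><sku>A1</sku><price>10</price></product>',
            '<product><sku>B2</sku><price>20</price></product>']
    buf = io.StringIO()
    docs_to_csv(docs, ['sku', 'price'], buf)
    print(buf.getvalue())

In MarkLogic the same shape would be a FLWOR that emits one row per batch of documents, as in Gary's snippet above, rather than materializing all 20,000 at once.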
Increasing the cache size from the Admin console is not a solution, because
the documents may grow in future.
thanks,
ranjan.
_______________________________________________
General mailing list
[email protected]
http://developer.marklogic.com/mailman/listinfo/general