The de-normalization phase will happen in a subsequent pass across the
data. For now I'm just trying to get the chunking and the refactoring of
names accomplished in this first pass.
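To make "refactoring of names" concrete, here's a minimal sketch of what
I have in mind, assuming the export is mysqldump --xml output (rows of
<field name="..."> elements); the table name, sample row, and the
local:refactor-row function are all made up for illustration:

    xquery version "1.0-ml";

    (: Sketch only: turn <field name="foo">bar</field> into <foo>bar</foo>.
       Assumes the column names are valid XML element names. :)
    declare function local:refactor-row(
      $table-name as xs:string,
      $row as element(row)
    ) as element()
    {
      element { $table-name } {
        for $field in $row/field
        return element { fn:string($field/@name) } { fn:string($field) }
      }
    };

    local:refactor-row('book',
      <row>
        <field name="id">42</field>
        <field name="title">XQuery Basics</field>
      </row>)
    (: returns <book><id>42</id><title>XQuery Basics</title></book> :)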
I put the original MySQL data dump into the database so that I could run
queries against it. It only took 31 seconds to load. So unless this
statement is streaming:
for $table at $index in xdmp:document-get('/tmp/export.xml')/*/*/table_data
I would think staging the document in the database would be safer. I'm
going to try your suggestion and see.
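For anyone following along, here's the staging approach I'm planning to
try, sketched with a made-up URI: xdmp:document-load pulls the file into
the database, and the semicolon separates the two statements into their
own transactions so the loaded document is visible to the query that
follows.

    xquery version "1.0-ml";
    (: Stage the export in the database under an assumed URI. :)
    xdmp:document-load('/tmp/export.xml',
      <options xmlns="xdmp:document-load">
        <uri>/staging/export.xml</uri>
      </options>)
    ;
    xquery version "1.0-ml";
    (: Query the staged copy with fn:doc instead of re-parsing the
       file from disk via xdmp:document-get on every run. :)
    for $table at $index in fn:doc('/staging/export.xml')/*/*/table_data
    return fn:string($table/@name)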
I had just seen an email thread that spoke to spawning, but I haven't
implemented it yet. I was looking for a transaction commit I could issue
inside the XQuery for loop so that the transaction wouldn't grow too
large. I'm still digesting the difference between single-statement and
multiple-statement transaction types. I'll try this spawning strategy
instead.
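For the archives, here's the shape of the spawning strategy as I
understand it: each spawned task runs on the task server in its own
transaction, so no single update grows too large. The module path and
the external variable name below are hypothetical.

    xquery version "1.0-ml";

    (: Parent query: enqueue one task per table. :)
    for $table in fn:doc('/staging/export.xml')/*/*/table_data
    return
      xdmp:spawn('/modules/process-table.xqy',
        (xs:QName('TABLE-NAME'), fn:string($table/@name)))

    (: /modules/process-table.xqy -- hypothetical worker module,
       run as its own transaction on the task server: :)
    xquery version "1.0-ml";
    declare variable $TABLE-NAME as xs:string external;

    for $row at $i in fn:doc('/staging/export.xml')
        /*/*/table_data[@name eq $TABLE-NAME]/row
    return
      xdmp:document-insert(
        fn:concat('/rows/', $TABLE-NAME, '/', $i, '.xml'),
        <row>{ $row/node() }</row>)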
Thanks for these suggestions,
Todd