Michael's last example with spawning almost worked. The generated document
name for each record reused the same index across tables, so records from
different tables collided on the same URI, later inserts overwrote earlier
ones, and I was left with only 45 documents in the end. I changed the
XQuery to read:
(: query console :)
for $table in
    xdmp:document-get('C:\Users\servicelogix\slx\slx.xml')/*/*/table_data
let $table-name := $table/@name/string()
for $row at $index in $table/row
(: build one element per row, named after the table, with one child
   element per non-empty field :)
let $record :=
  element { $table-name } {
    $row/field[text()]/element { @name } { text() }
  }
(: include the table name in the URI so row indexes from different
   tables no longer collide :)
let $uri := concat('/slx/', $table-name, '/', $index)
return
  xdmp:spawn('document-insert.xqy',
    (xs:QName('URI'), $uri, xs:QName('NEW'), $record))
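
The spawned module itself isn't shown in the thread. A minimal sketch,
assuming document-insert.xqy does nothing more than declare the two
external variables passed by xdmp:spawn and perform the insert:

(: document-insert.xqy, a minimal sketch; the real module may differ :)
xquery version "1.0-ml";
declare variable $URI as xs:string external;
declare variable $NEW as element() external;
xdmp:document-insert($URI, $NEW)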
The controller logic ran in 93 seconds, and the spawned tasks finished a
few seconds after that. By default there are 32 server threads configured
on my system, an i5-2410M processor (2 cores, 4 threads) at 2.3 GHz; no
re-configuration was done. The logic read the MySQL data dump straight
from the file system, eliminating the initial load step through the
Information Studio Flow. All looks good. Thank you, Michael, for taking
the time to write this example code.
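
Tasks queued by xdmp:spawn execute on the group's Task Server, so its
thread setting is the one that bounds this kind of run. A sketch of
reading that setting through the Admin API, assuming the group is named
"Default":

xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";
(: look up the Task Server thread count for the "Default" group;
   the group name here is an assumption :)
let $config := admin:get-configuration()
return admin:taskserver-get-threads($config,
  admin:group-get-id($config, "Default"))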
Now on to phase two: de-normalizing the tables into documents.
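
As a first thought for that phase, a purely hypothetical sketch of
nesting child rows under their parent records; the names ACCOUNT,
CONTACT, and ACCOUNTID are stand-ins for whatever the slx schema
actually uses:

(: hypothetical de-normalization sketch: ACCOUNT, CONTACT, and ACCOUNTID
   are placeholder names, not the real slx schema :)
for $account in fn:collection()/ACCOUNT
let $contacts := fn:collection()/CONTACT[ACCOUNTID = $account/ACCOUNTID]
where fn:exists($contacts)
return xdmp:node-insert-child($account, element contacts { $contacts })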