Hi all
I've written before about how we restructured our data to drastically improve
search times on a 500-million-token corpus. [1] Now, after some minor
improvements, I am trying to import the generated XML files into BaseX. The
result would be 100,000s to millions of BaseX databases - as we expect…
Hi Bram,
not being much into the issue of creating databases at this scale, I'm
not sure whether the OOM problems you are facing are actually related to
BaseX or to the JVM.
Anyway, something rather simple you could try is an approach "in between".
Instead of opening a single session for the create statement…
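The "in between" idea — one session per batch of documents, rather than one session per document or a single session for everything — can be sketched generically in Python. The `open_session` and `create_db` callables here are placeholders for whatever BaseX client API is actually in use; only the batching pattern itself is the point.

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def import_in_batches(docs, batch_size, open_session, create_db):
    """Import (name, xml) pairs, opening one session per batch.

    A fresh session per batch keeps per-session memory bounded
    without paying the cost of one connection per document.
    """
    for chunk in batched(docs, batch_size):
        session = open_session()  # placeholder for a real BaseX session
        try:
            for name, xml in chunk:
                create_db(session, name, xml)  # placeholder client call
        finally:
            session.close()
```

Tuning `batch_size` then becomes the knob for trading session overhead against memory pressure.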
Hello,
I'm developing an application server with node.js. When the server
starts, it creates a single connection to a BaseX database, to allow
subsequent user queries to the database.
I notice that when I connect separately to the BaseX database with the
BaseXClient command-line tool, the app…
Hello there,
I am trying to parse a large CSV file into XML with the following XQuery:
let $file := fetch:text("D:\BPLAN\tlk.txt")
let $convert := csv:parse($file, map { 'header' : true(), 'separator' :
'tab'})
return fn:put(
{for $row in $convert/csv/record
return {$row/*}
},
"D:\BPLAN\
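For comparison, the shape of output that `csv:parse` with `'header': true()` produces (a `<csv>` root with one `<record>` per row and one child element per named column) can be sketched in plain Python with the standard library. The sample input below is made up for illustration; the real file path in the message above is truncated.

```python
import csv
import io
import xml.etree.ElementTree as ET

def tsv_to_xml(text):
    """Convert tab-separated text with a header row into a
    <csv><record>...</record></csv> tree, mirroring the element
    layout BaseX's csv:parse gives with 'header': true()."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    root = ET.Element("csv")
    for row in reader:
        record = ET.SubElement(root, "record")
        for name, value in row.items():
            ET.SubElement(record, name).text = value
    return root

# Hypothetical sample data standing in for the truncated tlk.txt file.
sample = "name\tcode\nAlpha\tA1\nBeta\tB2\n"
print(ET.tostring(tsv_to_xml(sample), encoding="unicode"))
```

Note this assumes the header fields are valid XML element names; real-world column headers often need sanitizing first.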