Hi Mario,
> I just wanted to know if it is also possible to load (and manipulate)
> large files, which can be hundreds of megabytes, or even multiple
> gigabytes. I tried to load a file into sedna via php api (240 MB), and
> it took 463 seconds to import it (on win 64). I recognized that the
> se_sm.exe process was working all the time, but only using about 14MB of
> memory.
>
Yes, it's possible. See wikixmldb.org for an example: it serves a 36 GB XML
file, and it works quite well.
> How can the import be accelerated? Couldn't find anything in the
> documentation about that topic.
>
There are two things you can do:
1. Increase the number of buffers (use SM's -bufs-num option, see
http://modis.ispras.ru/sedna/adminguide/AdminGuidesu3.html#x7-130002.3.4).
The default of 100 MB is too small for complex and large data.
2. Use log-less mode (to turn it on in se_term, see LOG_LESS_MODE here:
http://modis.ispras.ru/sedna/adminguide/AdminGuidesu4.html#x8-150002.4).
This option is also available via the PHP API: SE OPTID LOGLESS.
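To give a concrete idea of the first step, here is a command-line sketch. The database name and the buffer count are placeholders, and the exact mapping from -bufs-num to memory depends on your Sedna version, so check the admin guide linked above before applying it:

```shell
# Shut down the storage manager for the database first
# ('mydb' is a placeholder database name).
se_smsd mydb

# Restart it with more buffers. The count below is purely
# illustrative; consult the admin guide for how -bufs-num
# translates into memory on your installation.
se_sm -bufs-num 16000 mydb
```

With more buffers available, SM spends less time evicting and re-reading pages during bulk load, which is usually where the time goes on big documents.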
> I also noticed that there is only one function for loading xml
> documents, and thats "sedna_load", where you need to pass the whole
> document as a string. Meaning it is necessary to parse the whole 240MB
> into a string (-> into memory), so that you can pass it to the function.
> Or is there a better way to do this?
>
No, AFAIR. For the initial bulk load of really big data you should use some
other way (se_term, the C API, or the Java API), which can stream the file
from disk instead of holding it all in memory.
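For example, with se_term you can bulk-load straight from a file using Sedna's LOAD statement, so the document never has to pass through PHP's memory. A sketch (file, document, and database names are placeholders; check the programmer's guide for the exact se_term flags on your version):

```shell
# Write the bulk-load statement to a script file.
# LOAD "<file>" "<document name>" reads the XML directly from disk.
echo 'LOAD "big.xml" "bigdoc"' > load.xquery

# Run the script against the database 'mydb' via se_term.
se_term -file load.xquery mydb
```

The same LOAD statement can also be typed interactively at the se_term prompt.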
Ivan Shcheklein,
Sedna Team
_______________________________________________
Sedna-discussion mailing list
Sedna-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sedna-discussion