Hi Nedim,

The largest dataset handled by Sedna that we know of is the Wikipedia
dump behind the wikixmldb.org demo.

Some statistics:

1. Descriptive schema size: ~100K.
2. Raw data: 45GB; loaded data: ~200GB.
3. Wikixmldb.org handles quite complex queries pretty well (with the help
of indices, see the sketch after this list) using 3GB of database buffers.
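
For reference, a minimal sketch of a Sedna index definition; the document
name "wiki" and the element paths here are placeholders, not the actual
wikixmldb.org setup:

    CREATE INDEX "page-title"
        ON doc("wiki")/mediawiki/page
        BY title AS xs:string

A query can then probe the index explicitly, e.g.
index-scan("page-title", "Moscow", "EQ") to fetch the pages whose title
equals "Moscow".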

We haven't had experience with millions of descriptive schema nodes in
the data. It should work, though I can't say anything about performance
or database size. I would recommend trying to load your current data
into Sedna; it's very easy (a rough sketch is below). Let us know if you
need any help.
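
Assuming a standard Sedna installation, something like this should do it
(the database name "mydb", the file path, and the document name are
placeholders):

    se_gov                # start the Sedna governor
    se_cdb mydb           # create a new database
    se_sm mydb            # start the database
    se_term mydb          # open a terminal session, then issue:

    LOAD "/path/to/your/data.xml" "mydoc"

There is also a three-argument form, LOAD "file" "document" "collection",
for loading the document into a collection.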

BTW, it's better to use the latest development build:

http://modis.ispras.ru/FTPContent/sedna/development

Ivan Shcheklein,
Sedna Team