There isn’t a chunking mechanism in XMLBeans, but I’m pretty sure you could write a StAX or SAX wrapper (StAX should be easier) to split the big stream into smaller ones, and then load one small subtree at a time into an XMLBean.
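A minimal sketch of that kind of StAX wrapper, using only the JDK's javax.xml.stream API. The `<record>` element name is a placeholder for whatever your repeating element is, and handing each chunk to an XMLBeans factory (e.g. something like `RecordDocument.Factory.parse(chunk)`) is the assumed next step, not shown here. Namespaces and comments are ignored to keep the sketch short:

```java
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.XMLStreamWriter;

// Sketch: walk the big document with a streaming cursor and serialize each
// <record> subtree into a small standalone string. Each string could then
// be parsed into an XMLBean one at a time, so only one record's worth of
// data is ever materialized in memory.
public class ChunkSplitter {

    public static List<String> splitRecords(Reader in, String recordName) throws Exception {
        List<String> chunks = new ArrayList<String>();
        XMLStreamReader xsr = XMLInputFactory.newInstance().createXMLStreamReader(in);
        while (xsr.hasNext()) {
            if (xsr.next() == XMLStreamConstants.START_ELEMENT
                    && recordName.equals(xsr.getLocalName())) {
                chunks.add(readSubtree(xsr));
                // Real code would parse-and-discard each chunk here instead
                // of accumulating them, to keep memory use flat.
            }
        }
        return chunks;
    }

    // Copies the subtree rooted at the current START_ELEMENT into a string.
    private static String readSubtree(XMLStreamReader xsr) throws Exception {
        StringWriter sw = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(sw);
        int depth = 0;
        while (true) {
            switch (xsr.getEventType()) {
                case XMLStreamConstants.START_ELEMENT:
                    w.writeStartElement(xsr.getLocalName());
                    for (int i = 0; i < xsr.getAttributeCount(); i++) {
                        w.writeAttribute(xsr.getAttributeLocalName(i), xsr.getAttributeValue(i));
                    }
                    depth++;
                    break;
                case XMLStreamConstants.CHARACTERS:
                    w.writeCharacters(xsr.getText());
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    w.writeEndElement();
                    if (--depth == 0) {
                        w.flush();
                        return sw.toString();
                    }
                    break;
            }
            xsr.next();
        }
    }

    public static void main(String[] args) throws Exception {
        String doc = "<records>"
                   + "<record id=\"1\"><name>a</name></record>"
                   + "<record id=\"2\"><name>b</name></record>"
                   + "</records>";
        List<String> chunks = splitRecords(new StringReader(doc), "record");
        System.out.println(chunks.size());   // one chunk per <record>
        System.out.println(chunks.get(0));
    }
}
```

Because the cursor only ever moves forward and each extracted chunk is small, memory stays bounded by the size of one record rather than the whole 2GB file.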

 

Would this work for you? What kind of operations do you need to perform on the huge file? Can they be done sequentially?

 

Cezar

 


From: John Harrison [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 09, 2006 1:34 AM
To: user@xmlbeans.apache.org
Subject: Parsing large file

 

Hi,
  I am investigating XML parsers for parsing huge XML files (around 2GB each).
Each file can have around 500,000-750,000 objects.
Due to these numbers it is not practical to store everything in memory.
Does XMLBeans provide any facilities to parse such huge files?


Is there anything that XMLBeans provides to parse the file in stages from an I/O stream?

Any input on this is much appreciated.

Regards
John.

_______________________________________________________________________
Notice:  This email message, together with any attachments, may contain
information  of  BEA Systems,  Inc.,  its subsidiaries  and  affiliated
entities,  that may be confidential,  proprietary,  copyrighted  and/or
legally privileged, and is intended solely for the use of the individual
or entity named in this message. If you are not the intended recipient,
and have received this message in error, please immediately return this
by email and then delete it.