Hi,

I need to process XML files with the following basic structure. In the 
worst case they will be a couple of GB in size and contain on the order of 
ten million "Thing" elements (each of which has a few child elements and 
attributes a couple of levels deep that I want to handle using JiBX):

<Things>
  <Thing ... />
  <Thing ... />
  ...
</Things>

Obviously, this whole structure won't fit in memory. I suppose I could do my 
own parsing to consume the top-level element and then programmatically invoke 
JiBX's unmarshalling for each Thing, as suggested here: 
http://www.mail-archive.com/jibx-users@lists.sourceforge.net/msg00535.html. 
However, that would mean invoking JiBX several million times, and I'm not 
sure what kind of overhead that implies. Does it make sense to do it that 
way, or is there some other approach I could take? Performance is not a big 
issue, by the way. I'm mostly interested in doing some transformations that 
JiBX's mappings should be well suited for, and in avoiding having to chop up 
the files manually.
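
To make it concrete, here is a rough, untested sketch of what I had in mind 
based on that thread. It assumes I have a Thing class mapped in my binding, 
and that UnmarshallingContext's setDocument/parsePastStartTag/isAt/
unmarshalElement methods work the way I understand them; the process() call 
is just a placeholder for my transformation logic:

  import java.io.FileInputStream;
  import java.io.InputStream;

  import org.jibx.runtime.BindingDirectory;
  import org.jibx.runtime.IBindingFactory;
  import org.jibx.runtime.impl.UnmarshallingContext;

  public class ThingStreamReader {

      public static void main(String[] args) throws Exception {
          IBindingFactory factory = BindingDirectory.getFactory(Thing.class);
          // Cast to the implementation class to get at the pull-parsing methods.
          UnmarshallingContext ctx =
              (UnmarshallingContext) factory.createUnmarshallingContext();

          InputStream in = new FileInputStream(args[0]);
          try {
              ctx.setDocument(in, null);
              // Consume the <Things> wrapper element myself ...
              ctx.parsePastStartTag(null, "Things");
              // ... then let JiBX unmarshal each <Thing> in turn.
              while (ctx.isAt(null, "Thing")) {
                  Thing thing = (Thing) ctx.unmarshalElement();
                  process(thing);   // placeholder for the actual transformation
              }
          } finally {
              in.close();
          }
      }

      private static void process(Thing thing) {
          // transformation logic would go here
      }
  }

The point of the cast is that the IUnmarshallingContext interface alone 
doesn't seem to expose the positioning methods, so this is the part I'm 
least sure about.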

Thanks in advance,
Lennart


