Loading the whole XML into memory is not reasonable. Nmap can output
some *really* verbose and *really* huge XML files, which we would
never want to have fully loaded in memory. That's why we moved from
DOM parsing to SAX.
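
Just to illustrate the difference, here's a rough sketch of the kind
of streaming handler SAX gives us (the file name and the summary
fields are made up, of course):

    import xml.sax

    class NmapSummaryHandler(xml.sax.ContentHandler):
        """Collect a tiny summary while streaming, instead of
        building the whole document tree in memory like DOM does."""
        def __init__(self):
            xml.sax.ContentHandler.__init__(self)
            self.current_addr = None
            self.ports = []  # (address, portid) pairs seen so far

        def startElement(self, name, attrs):
            # Nmap's XML uses <address addr=... addrtype=...> and
            # <port portid=...> elements under each <host>.
            if name == "address" and attrs.get("addrtype") == "ipv4":
                self.current_addr = attrs.get("addr")
            elif name == "port" and self.current_addr is not None:
                self.ports.append((self.current_addr,
                                   attrs.get("portid")))

    handler = NmapSummaryHandler()
    xml.sax.parse("scan.xml", handler)  # streams; memory stays flat
    print(handler.ports)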

So, I'm not fully aware of how things work in the proposed umitdb,
but it looks reasonable to mix the files with the database. We could
keep a summary of each XML file's content inside the database, and
only look inside the XML itself if, based on the search term used,
the summary looks promising.
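
Something along these lines, say (just a sketch -- the table and
column names are made up, I don't know what umitdb actually looks
like):

    import sqlite3

    conn = sqlite3.connect("umitdb.sqlite")
    conn.execute("""CREATE TABLE IF NOT EXISTS scan_summary (
                        scan_id  INTEGER PRIMARY KEY,
                        xml_path TEXT,  -- full XML stays on disk
                        keywords TEXT   -- addresses, hostnames, services
                    )""")

    def candidate_files(term):
        # Cheap pass over the small summaries first; the expensive
        # SAX pass would then run only on the files returned here.
        rows = conn.execute("SELECT xml_path FROM scan_summary "
                            "WHERE keywords LIKE ?",
                            ("%" + term + "%",))
        return [path for (path,) in rows]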

Or maybe I'm just wrong, and we can simply keep the XML files inside
the database and search them there directly.
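
In that case it would be something as simple as (again, just a
sketch with made-up names):

    import sqlite3

    conn = sqlite3.connect("umitdb.sqlite")
    conn.execute("CREATE TABLE IF NOT EXISTS scan_xml "
                 "(scan_id INTEGER PRIMARY KEY, xml TEXT)")

    def search(term):
        # Substring search over the raw XML stored in the database;
        # simple, but it scans every document on every query.
        return conn.execute("SELECT scan_id FROM scan_xml "
                            "WHERE xml LIKE ?",
                            ("%" + term + "%",)).fetchall()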

Guilherme, do you remember what problems you were facing when you
decided to move the files out of the database? Maybe that could help
us make a better decision.


Cheers!

-- 
Adriano Monteiro Marques

http://adriano-marques.blogspot.com
http://umit.sourceforge.net
[EMAIL PROTECTED]

"Don't stay in bed, unless you can make money in bed." - George Burns
