A 20MB file will certainly be slow to build into an XML tree: expect 60 to
90 seconds on a typical machine and around 300MB of RAM usage.
I'm working with files of up to around 30MB for some software of mine
that's in progress. I haven't tried to implement any better solution at
present; for now I'll be warning users that they may need 1GB of RAM or
more, which should generally be achievable for my target market. I'm
thinking of using the hasMemory function to generate a memory
warning if necessary before creating the tree, although the
documentation says it may not return useful values. It seems to be too
conservative, on Windows at least.
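A rough sketch of that pre-flight check might look like the following. This assumes hasMemory and revXMLCreateTreeFromFile behave as documented, and the 15x multiplier is only a guess based on the 20MB-file/300MB-RAM estimate above:

```
-- sketch only: warn before building the tree if memory looks tight
command loadBigXML pFilePath, pFileSize
   -- assume the DOM needs roughly 15x the file size (crude estimate)
   if hasMemory(15 * pFileSize) is false then
      answer warning "This file may need more RAM than is available." \
            with "Cancel" or "Continue"
      if it is "Cancel" then exit loadBigXML
   end if
   put revXMLCreateTreeFromFile(pFilePath, false, true, false) \
         into tTreeID
end loadBigXML
```

Since hasMemory may be over-conservative, a "Continue" escape hatch like the one above seems safer than refusing outright.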

regards
Martin

On 11/05/07, Bill Marriott <[EMAIL PROTECTED]> wrote:
Klaus,

> does anyone have some experience/hints with/for the performance of  RevXML
> with big XML files? [...] we want to avoid using a "real" database engine.

Your biggest issue with "big" anything is memory usage, because the source
table and any transformations you do with it are stored in RAM. As your data
gets larger and larger, you run the risk of slowing down considerably or
even running out of memory altogether. Not everyone has a gigabyte or more
to play with.

The advantage of using a "real" database engine, like SQLite integrated with
Rev 2.8.1, is that you'll be able to manipulate large tables with
high-performance SQL commands, keeping your memory footprint down to just
what is needed for on-screen display.
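For instance, with SQLite the database stays on disk and only the queried rows come into RAM. A sketch using the revDB calls (file name and table are hypothetical):

```
-- sketch: fetch only the rows needed for display, leaving the rest on disk
put revOpenDatabase("sqlite", "records.db", , , ) into tConnID
put revDataFromQuery(tab, return, tConnID, \
      "SELECT name, value FROM records LIMIT 50") into field "Display"
revCloseDatabase tConnID
```

Paging through results with LIMIT/OFFSET keeps the footprint at roughly one screenful of data regardless of table size.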

- Bill



_______________________________________________
use-revolution mailing list
[email protected]
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution

