At the 200MB/330MB point in the parse, I have released 200MB of HTML comment nodes,
and should have accumulated only about 200MB of JavaScript arrays/objects.
That's _just_ the data; no HTML has been generated yet.
I accept a 5x overhead for turning it into HTML, but I wonder why
Firefox
a) stops updating the screen despite regular .setTimeout() pauses, and
b) accumulates 1GB of memory after only 200MB has been processed.
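
For reference, the decode loop is shaped roughly like the sketch below; the
names are placeholders rather than my actual code, and it assumes the comment
payload is plain JSON and that commentNodes is an ordinary array (not a live
NodeList). Each pass decodes a batch, releases the nodes, then yields with
setTimeout(0), which is what I expected would give Firefox a chance to repaint:

    // Rough sketch only (placeholder names, assumes JSON payloads).
    // Each pass decodes a batch of comment nodes into plain JS values,
    // releases the nodes, and yields so the browser can repaint.
    function decodeInChunks(commentNodes, onDone) {
        var result = [];
        var i = 0;
        var BATCH = 1000;                        // comment nodes per pass

        function pass() {
            var end = Math.min(i + BATCH, commentNodes.length);
            for (; i < end; i++) {
                var node = commentNodes[i];
                result.push(JSON.parse(node.nodeValue));  // decode payload
                node.parentNode.removeChild(node);        // release the node
                commentNodes[i] = null;                   // drop our reference
            }
            if (i < commentNodes.length) {
                setTimeout(pass, 0);             // yield to the UI thread
            } else {
                onDone(result);
            }
        }
        pass();
    }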

And I don't think paging is possible, since I want to display things like the
path of parent nodes and the sibling list for each sub-array in the data.
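
To be concrete about the per-node context I want to render, here is a rough
illustration (hypothetical helpers; it assumes each decoded node keeps a name,
a reference to its parent, and a children list), which is why slicing the data
into independent pages doesn't really work for me:

    // Hypothetical node shape: { name, parent, children }
    function pathOf(node) {
        // walk up the parent chain, e.g. ["root", "orders", "items"]
        var path = [];
        for (var n = node; n; n = n.parent) {
            path.unshift(n.name);
        }
        return path;
    }

    function siblingsOf(node) {
        // every other child of the same parent
        if (!node.parent) return [];
        return node.parent.children.filter(function (c) { return c !== node; });
    }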


On Thu, Jan 28, 2010 at 5:30 PM, Jochem Maas <joc...@iamjochem.com> wrote:
> just guessing, but I doubt you have a real issue in your parser-decoder;
> the memory used by Firefox seems reasonable to my untrained eye - I'd guess
> that a factor-5 memory overhead is normal given the amount of abstraction
> involved in the browser doing its thing.
>
> pretty cool what you're trying to do - but totally nuts, of course :)
>
> I would think that your only recourse, really, is to chunk the output
> of both the server and the browser so that you can, theoretically, page
> through the data structure in the browser ... might be totally impractical,
> if not impossible. not to mention you're likely to have to use file-based
> storage for the chunks of output on the server side to avoid running out of mem.
>
> hard problem!
>
