Thanks for your quick reply, Alan. I saw a post by Aaron about a 1.3GB
file and looping through it to find the opening and closing nodes. Is this
what you mean by breaking the file up?

I have successfully parsed a 300MB file using this method.
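Roughly, what I'm doing looks like the sketch below. It streams the file line by line through java.io rather than reading it all in with cffile; the path, the node name ("record") and the per-node handling are just placeholders for whatever the real feed uses, and it assumes each opening and closing tag sits on its own line:

<!--- Hypothetical sketch: stream the file with java.util.Scanner, buffer each
      <record>...</record> block, and xmlParse() only that small chunk --->
<cfset scanner = createObject("java", "java.util.Scanner").init(
        createObject("java", "java.io.File").init("/data/bigfeed.xml"))>
<cfset buffer = "">
<cfset inNode = false>
<cfloop condition="true">
    <cfif NOT scanner.hasNextLine()>
        <cfbreak>
    </cfif>
    <cfset line = scanner.nextLine()>
    <cfif find("<record", line)>
        <cfset inNode = true>
    </cfif>
    <cfif inNode>
        <cfset buffer = buffer & line & chr(10)>
    </cfif>
    <cfif find("</record>", line)>
        <!--- Only one small node is ever held in memory as a DOM --->
        <cfset node = xmlParse(buffer)>
        <!--- ... handle the node here, e.g. write it to the database ... --->
        <cfset buffer = "">
        <cfset inNode = false>
    </cfif>
</cfloop>
<cfset scanner.close()>

Memory stays fairly flat doing it this way, since only one record's worth of XML is parsed at any moment.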

What sort of memory would you recommend to support the direct cffile option?
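(By the direct cffile option I just mean slurping and parsing the whole document in one go, i.e. something like the two lines below with a placeholder path. The raw string and the full parsed DOM both sit in the heap at the same time, and the DOM is typically several times the size of the raw XML, which is why I'm asking about sizing.)

<!--- Whole-file read: raw string plus full DOM are both held in memory at once --->
<cffile action="read" file="/data/bigfeed.xml" variable="xmlText">
<cfset doc = xmlParse(xmlText)>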

Thanks,

Lee

On Monday, 6 June 2016, Alan Williamson <[email protected]> wrote:

> Yeah .. this doesn't surprise me.
>
> Reading 73MB of data into memory on an instance that tight, you will
> have problems.  And once you have it there, you will have major
> issues with the XML parsing ... XML is a huge memory hog at the best of
> times, but since you are about to throw a 73MB packet at it .. you would
> have trouble doing that in pure Java on a micro-instance, let alone with the
> overhead that CFML will add.
>
> Try breaking the file up into smaller chunks and processing it a bit at a time.
>
