On 13 Aug 2010, at 12:22, Alex Povolotsky wrote:
> I'm working on processing relatively big (10+ MB) XML files, and it
> seems to me that Catalyst is taking an awful lot of time on some
> internal processing before the call to the handler, surprisingly
> using 200 MB of RAM (about 40 MB before the request).
> Can I somehow improve Catalyst's performance?
Vague question is vague.
I very much doubt that this is a Catalyst issue. You know that
Catalyst log lines are batched until the end of the request by
default, right?
I think that you'll find that you are sucking 10+ MB of XML into perl
(using _significantly_ more memory than that for a large scalar), and
then munging over it in some way to produce a huge data structure
(which is also in RAM).
So, this is nothing to do with Catalyst - if you load a massive scalar
into RAM, and then parse that into a massive data structure, it's
going to use a lot of RAM, full stop...
Maybe a streaming approach (a SAX or pull parser?) would be better, so
that you never read the entire file into RAM.
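As a rough illustration of that streaming approach, here's a minimal sketch using XML::Twig, which processes elements as they are parsed and discards them, so memory use stays roughly constant. The filename `big.xml`, the element name `record`, and the handler body are assumptions - adapt them to your actual document structure:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use XML::Twig;

my $count = 0;

my $twig = XML::Twig->new(
    twig_handlers => {
        # Called once per <record> element as soon as it is fully parsed;
        # the rest of the file has not been loaded yet.
        record => sub {
            my ( $t, $elem ) = @_;
            $count++;
            # ... process $elem here ...
            $t->purge;    # free this element's memory immediately
        },
    },
);

$twig->parsefile('big.xml');    # streams the file; no 10+ MB scalar in RAM
print "processed $count records\n";
```

The key point is the `purge` call inside the handler: without it, XML::Twig keeps the whole tree in memory and you're back where you started. XML::LibXML's Reader interface is another option if you prefer a pull-style parser.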
Cheers
t0m
_______________________________________________
List: [email protected]
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/[email protected]/
Dev site: http://dev.catalyst.perl.org/