On 05/07/2012 05:37 PM, Jonathan Vanasco wrote:
- eventually i would refactor the code to use a SOA setup and have a
dedicated daemon handle the large stuff.


But doesn't that suffer from the same set of problems? If the (daemon) process persists in memory, it doesn't matter whether it's the WSGI app or a SOA approach. Unless you put it on a different server, but then it just saturates memory there instead of on the WSGI app server.


I think overall only a handful of solutions are acceptable:

1. build the XML in smaller chunks to a tempfile, then yield it back to the client
2. subprocess.Popen a script external to the persistent WSGI process
3. reload the process if it exceeds a certain memory threshold, or periodically after X requests
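For option 1, something like the following is what I have in mind, assuming the data arrives as an iterable of rows (the row source, element names, and 64 KB chunk size are just illustrative):

```python
import os
import tempfile

CHUNK = 64 * 1024  # stream back in 64 KB blocks


def build_xml_to_tempfile(rows):
    # Write the document incrementally so only one row is ever
    # held in memory; the full XML lives on disk, not in the process.
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".xml", delete=False)
    with tmp:
        tmp.write("<items>")
        for row in rows:
            tmp.write("<item>%s</item>" % row)
        tmp.write("</items>")
    return tmp.name


def file_iter(path, chunk_size=CHUNK):
    # Generator usable as a WSGI response iterable; deletes the
    # tempfile once the client has consumed it.
    try:
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk_size)
                if not block:
                    break
                yield block
    finally:
        os.remove(path)
```

The app's peak memory then stays at roughly one chunk regardless of how large the generated document gets.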


I also wonder what will happen when I get to PDF generation. I'll have to generate similarly large PDFs with potentially thousands of pages (prepress service). I have experience with ReportLab's tools, albeit for much smaller document sets, and it works well. At least with that I'm sure I can produce smaller chunks and then concat them.





.oO V Oo.


--
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en.