Hi everyone,

Maybe someone can shed some light on this.

I deployed a TG1 app using supervise (daemontools) and nginx as a load balancer.
This may be a Python-specific thing, but I'm just curious:
When I run 3 server instances (CherryPy servers) under one supervise process, 
each of the three processes uses about 1GB of memory (in my case).
All the processes share the same database pool (which is kind of curious).
So if I have prod.cfg, prod1.cfg and prod2.cfg each configured to a pool size 
of 20, the total number of database backends is 20 and not 60 as one would 
expect.
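(For what it's worth, one thing I've been wondering about: if the three servers were forked from a single parent after the pool's connections were opened, the children would inherit the parent's open descriptors, which would look exactly like a shared pool. This is just a guess at the mechanism, not anything TG1-specific; here's a minimal sketch using a pipe fd as a stand-in for a DB socket.)

```python
# Sketch (assumption, not TG1 internals): a descriptor opened before
# fork() is inherited by every child, so three forked workers can end
# up multiplexing one resource instead of opening their own.
# Unix-only, since it uses os.fork().
import os

# Parent opens a "connection" (a pipe fd standing in for a DB socket).
r, w = os.pipe()

children = []
for _ in range(3):
    pid = os.fork()
    if pid == 0:
        # Child: it never opened the pipe itself, yet it can write to
        # the inherited descriptor -- analogous to sharing one DB pool.
        os.write(w, b"x")
        os._exit(0)
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)

os.close(w)
print(os.read(r, 10))  # b'xxx' -- all three children used the shared fd
```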

Today I had a couple of issues and I changed the deployment. All I did was 
start the 3 server instances with a separate supervise process each, so now 
they're completely independent of each other.
The result is that suddenly the memory consumption of each process is only 
around 100MB - roughly a tenth of what it was before.
Also, I now see 60 database connections being used.

I'm struggling to explain the difference and was hoping that someone with more 
knowledge of the internals can shed some light on this weird behavior.
Does it have to do with the global interpreter lock? Something about parent 
processes and controlling ttys?

I'm pleased the processes don't chew up as much memory as before. The system 
seems a tad slower than before (surprisingly), but I haven't tested that 
well enough yet to make a reliable assessment (it could also be my hosting 
provider...)

Anyway, I'd love to know why the memory consumption suddenly drops to a tenth 
when each interpreter has a different parent process.

Thanks

Uwe
