On 16/02/2015, at 9:51 PM, Paul Royik <[email protected]> wrote:
> [Mon Feb 16 10:50:00.885860 2015] [wsgi:error] [pid 7129:tid 139757846304512]
> ('RUNNING GARBAGE COLLECTOR', 1424083800.88584)
...
> [Mon Feb 16 04:50:43.334295 2015] [wsgi:error] [pid 7129:tid 139757388543744]
> ('RUNNING GARBAGE COLLECTOR', 1424083843.33428)
> [Mon Feb 16 04:50:43.340423 2015] [wsgi:error] [pid 7129:tid 139757388543744]
> ('RUNNING GARBAGE COLLECTOR', 1424083843.340413)
> [Mon Feb 16 04:50:43.357043 2015] [wsgi:error] [pid 7129:tid 139757388543744]
> ('RUNNING GARBAGE COLLECTOR', 1424083843.357031)
So?
Great, it says the code does what it was meant to.
But you seem to have missed the point of what it is meant to do.
Let your application keep running. If that output keeps getting displayed for
ever and ever, then the garbage collector is at least still running.
If, however, that output stops coming out while the process keeps running and
memory usage keeps growing, then the garbage collector has stopped running.
If it does continually keep coming out and memory keeps growing, then you know
at least that the issue isn't the garbage collector getting stuck.
Note that this will fill the logs pretty quickly, so if memory usage looks to
keep growing and this output is still appearing, take the code out again.
Explain properly in words what you saw happening; there is no point sending
me the logs.
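For anyone wanting to see the trick the monitor relies on in isolation: an object hanging off a reference cycle is only released when the cyclic garbage collector sweeps the cycle, at which point its `__del__` fires. A minimal, CPython-specific sketch (the names `Sentinel` and `make_cycle` are my own):

```python
import gc

collected = []

class Sentinel(object):
    def __del__(self):
        # Runs only once the cycle holding us has been reclaimed.
        collected.append(True)

def make_cycle():
    class Node(object):
        pass
    a, b = Node(), Node()
    a.other, b.other = b, a   # a <-> b: a reference cycle
    a.sentinel = Sentinel()   # hangs off the cycle, not part of it
    # a and b go out of scope here, but the cycle keeps both alive
    # until the cyclic garbage collector runs.

gc.disable()                  # rule out an automatic collection
make_cycle()
assert not collected          # reference counting alone cannot free it
gc.collect()                  # force a collection pass
assert collected              # __del__ fired when the cycle was swept
gc.enable()
print('sentinel finalized:', bool(collected))
```

This is why the monitor code keeps printing for as long as the collector keeps running: each sweep releases a Monitor instance whose `__del__` creates the next cycle.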
Graham
> On Monday, February 16, 2015 at 12:37:32 PM UTC+2, Graham Dumpleton wrote:
> Put this in your WSGI script file (wsgi.py).
>
> import time
> import threading
>
> class Monitor(object):
>
>     initialized = False
>     lock = threading.Lock()
>
>     count = 0
>
>     @classmethod
>     def initialize(cls):
>         with Monitor.lock:
>             if not cls.initialized:
>                 cls.initialized = True
>                 cls.rollover()
>
>     @staticmethod
>     def rollover():
>         print('RUNNING GARBAGE COLLECTOR', time.time())
>
>         # Create a reference cycle that only the cyclic garbage
>         # collector can reclaim, with a Monitor instance hanging
>         # off it. When the collector sweeps the cycle, the Monitor
>         # is released, its __del__ fires and a new cycle is created.
>
>         class Object(object):
>             pass
>
>         o1 = Object()
>         o2 = Object()
>
>         o1.o = o2
>         o2.o = o1
>
>         o1.t = Monitor()
>
>         del o1
>         del o2
>
>     def __del__(self):
>         Monitor.count += 1
>         Monitor.rollover()
>
> Monitor.initialize()
>
> Then monitor the log file and see if it periodically outputs 'RUNNING GARBAGE
> COLLECTOR' or whether it stops being output after a while.
>
> Graham
>
> On 16/02/2015, at 9:29 PM, Paul Royik <[email protected]> wrote:
>
> I'm sorry. Django 1.7.1
>
> On Monday, February 16, 2015 at 12:19:33 PM UTC+2, Graham Dumpleton wrote:
> I asked what version of Django you are running, not Python.
>
> Graham
>
> On 16/02/2015, at 9:12 PM, Paul Royik <[email protected]> wrote:
>
> I'm using Python 2.7.9
> So, what solution do you propose?
>
> Is there any way to kill a thread? Because things are now working worse than
> with the unstoppable thread; I'm hitting the request timeout.
> Also, memory grows on every request.
> Maybe there is a way to kill the thread during the external calculations?
>
> On Monday, February 16, 2015 at 12:04:23 PM UTC+2, Graham Dumpleton wrote:
> What version of Django are you running? Older Django versions have a bug in
> them which can cause the Python garbage collector to block and no longer run.
> Memory usage will go up because Python objects are not reclaimed properly.
> It is not out of the question that other third party libraries could cause
> this also, and I can see a correlation between it and the fact that you are
> hitting the queue timeout where request threads are blocking on a thread mutex.
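One rough way to check whether automatic collection has stalled is to compare the per-generation allocation counters against the thresholds that normally trigger a collection (the helper name `gc_health` and the exact fields are my own; `gc.get_count()` and `gc.get_threshold()` are standard):

```python
import gc

def gc_health():
    # Per-generation allocation counters versus the thresholds that
    # normally trigger an automatic collection. Counters that sit far
    # above the thresholds for a long time suggest that automatic
    # collection is no longer actually running.
    return {
        'enabled': gc.isenabled(),
        'counts': gc.get_count(),
        'thresholds': gc.get_threshold(),
        'uncollectable': len(gc.garbage),
    }

print(gc_health())
```

Logging this periodically alongside memory usage would show whether the counters are being reset by collections or just climbing.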
>
> The MaxRequestWorkers warning is because you ran out of capacity in the
> daemon processes due to all your long running requests and/or hung requests.
> The fact you are hitting the request timeout means that the daemon process
> has likely stopped taking more requests, and as new requests back up, the
> Apache child worker processes eventually complain as they too run out of
> capacity for proxying requests.
>
> So it is an outcome of the problems you are having. You still need to work
> out the underlying problem.
>
> Graham
>
> On 16/02/2015, at 8:53 PM, Paul Royik <[email protected]> wrote:
>
> I also got the following error:
> server reached MaxRequestWorkers setting, consider raising the
> MaxRequestWorkers setting
>
> On Sunday, February 15, 2015 at 10:42:19 PM UTC+2, Paul Royik wrote:
> As I discovered, memory grows on every request, not only in the calculator,
> even in the admin. The situation is very close to this:
> http://stackoverflow.com/questions/2293333/django-memory-usage-going-up-with-every-request
> I hit 3 GB. It is the first time.
>
> On Sunday, February 15, 2015 at 1:59:06 PM UTC+2, Paul Royik wrote:
> It grows. Below are the values between subsequent requests.
> 78988
> 85503
> 92873
> 100237
>
> On Sunday, February 15, 2015 at 1:36:15 PM UTC+2, Graham Dumpleton wrote:
> It being empty is fine.
>
> At least it is not caused by uncollectable objects.
>
> Next would be to try printing out periodically:
>
> len(gc.get_objects())
>
> and see if it grows over time.
>
> This is not conclusive either, but if it does keep growing, that is still
> useful to know.
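One way to print that periodically (a sketch; the helper name and default interval are my own choices) is a daemon thread started once from the WSGI script file:

```python
import gc
import threading
import time

def report_object_counts(interval=60.0):
    # Print len(gc.get_objects()) every `interval` seconds from a
    # daemon thread, so a steadily growing count shows up in the logs.
    def worker():
        while True:
            print('OBJECT COUNT', len(gc.get_objects()), time.time())
            time.sleep(interval)
    thread = threading.Thread(target=worker)
    thread.daemon = True   # don't block interpreter shutdown
    thread.start()
    return thread

report_object_counts()
```

Marking the thread as a daemon matters here: a non-daemon background thread would keep the daemon process from shutting down cleanly.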
>
> Graham