No, more precisely: I don't know.
What I see is steady memory growth with small decreases, like 200, 240, 
235, 260, 280.
At this point, according to Google Analytics, I have roughly 3-5 users, but 
they access static pages, not the calculator.

On localhost, in the Windows Task Manager, I see a steady memory increase 
from request to request in the python32.exe process, and the request can be 
of any kind: calculator, admin, static page.
My guess is that either some thread is still running or memory is simply 
not being released.
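
To tell those two apart, a minimal WSGI middleware sketch (illustrative names, not part of my actual site) could log the live threads and the number of objects the garbage collector is tracking after every request:

```python
import gc
import sys
import threading

def diagnostics_middleware(application):
    """Wrap a WSGI application and log live threads and tracked
    object counts after every request finishes."""
    def wrapper(environ, start_response):
        try:
            return application(environ, start_response)
        finally:
            names = [t.name for t in threading.enumerate()]
            sys.stderr.write('threads=%d %s objects=%d\n'
                             % (len(names), names, len(gc.get_objects())))
    return wrapper
```

Wrapping the application object in wsgi.py with this (application = diagnostics_middleware(application)) would show whether extra threads survive each request or whether the object count just keeps climbing.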

On Monday, February 16, 2015 at 1:33:30 PM UTC+2, Graham Dumpleton wrote:
>
> Please confirm what you are saying.
>
> That is, are you saying that if you fire a single request at your web site, 
> and after it finishes successfully you do not send any more requests, 
> memory still grows in the interval between requests?
>
> Or do you really mean that while the server is idle memory stays the 
> same, but as more and more requests arrive memory keeps 
> growing?
>
> Graham
>
> On 16/02/2015, at 10:26 PM, Paul Royik <[email protected] <javascript:>> 
> wrote:
>
> Between requests I can confirm that there was no activity.
> Sorry, I don't know about the threads; I just see the memory increase.
> I am using the latest code, i.e. the code with the time limit. I am not 
> using the very first code with the unstoppable thread right now.
>
> On Monday, February 16, 2015 at 1:21:12 PM UTC+2, Graham Dumpleton wrote:
>
> It will not output anything between web requests. It relies on some Python 
> code actually being run. How often it will be output depends on how much 
> activity is going on in the Python code around creation and release of 
> Python objects.
>
> Can you confirm whether between those two points there were no actual web 
> requests occurring?
>
> Now when you say 'although threads are there and memory is growing', how 
> are you determining that threads are still there?
>
> Aren't you using code at the moment which isn't creating background 
> threads, but instead tries to do the work in the web request by checking in 
> the algorithm whether too much time has gone past?
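
That time-limit approach can be sketched roughly like this (a hypothetical helper, not the actual calculator code): the work is split into small steps and the elapsed time is checked inside the request thread between steps.

```python
import time

class TimeBudgetExceeded(Exception):
    """Raised when the calculation runs past its time budget."""

def run_with_deadline(step, budget_seconds):
    """Run step() repeatedly inside the request thread, aborting
    cooperatively once the time budget is used up. step() returns
    None to signal completion."""
    deadline = time.time() + budget_seconds
    results = []
    while True:
        if time.time() > deadline:
            raise TimeBudgetExceeded('calculation took too long')
        item = step()
        if item is None:
            return results
        results.append(item)
```

The request handler catches TimeBudgetExceeded and returns an error page, so no background thread is ever left behind.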
>
> Or have you changed everything back to spawning your own threads again and 
> not said you had done that?
>
> Graham
>
> On 16/02/2015, at 10:10 PM, Paul Royik <[email protected]> wrote:
>
> It runs while the page is loading.
> When the page is loaded it stops running (although the threads are there 
> and memory is growing).
> Look at this log:
> [Mon Feb 16 04:50:43.357043 2015] [wsgi:error] [pid 7129:tid 
> 139757388543744] ('RUNNING GARBAGE COLLECTOR', 1424083843.357031)
> [Mon Feb 16 04:53:33.503189 2015] [wsgi:error] [pid 7129:tid 
> 139757545793280] ('RUNNING GARBAGE COLLECTOR', 1424084013.50317)
>
> 3 minutes of no collection.
>
> On Monday, February 16, 2015 at 1:02:36 PM UTC+2, Graham Dumpleton wrote:
>
>
> On 16/02/2015, at 9:51 PM, Paul Royik <[email protected]> wrote:
>
> [Mon Feb 16 10:50:00.885860 2015] [wsgi:error] [pid 7129:tid 
> 139757846304512] ('RUNNING GARBAGE COLLECTOR', 1424083800.88584)
>
> ...
>
> [Mon Feb 16 04:50:43.334295 2015] [wsgi:error] [pid 7129:tid 
> 139757388543744] ('RUNNING GARBAGE COLLECTOR', 1424083843.33428)
> [Mon Feb 16 04:50:43.340423 2015] [wsgi:error] [pid 7129:tid 
> 139757388543744] ('RUNNING GARBAGE COLLECTOR', 1424083843.340413)
> [Mon Feb 16 04:50:43.357043 2015] [wsgi:error] [pid 7129:tid 
> 139757388543744] ('RUNNING GARBAGE COLLECTOR', 1424083843.357031)
>
>
> So?
>
> Great, it says the code does what it was meant to.
>
> But you seemed to miss the point of what it is meant to do.
>
> Let your application keep running. If that output keeps getting displayed 
> forever then it means the garbage collector is at least still 
> running.
>
> If however that output stops coming out yet the process keeps running and 
> memory usage keeps growing then it means that the garbage collector stopped 
> running.
>
> If it does continually keep coming out and memory keeps growing, then you 
> know at least the issue isn't the garbage collector getting stuck.
>
> Note that this will fill the logs pretty quickly, so if memory usage looks 
> to keep growing and this is still coming out, take the code out again.
>
> Explain properly, in words, what you saw happening. 
>
> So there is no point sending me the logs.
>
> Graham
>
> On Monday, February 16, 2015 at 12:37:32 PM UTC+2, Graham Dumpleton wrote:
>
> Put this in your WSGI script file (wsgi.py).
>
> import time
> import threading
>
> class Monitor(object):
>
>     initialized = False
>     lock = threading.Lock()
>
>     count = 0
>
>     @classmethod
>     def initialize(cls):
>         with Monitor.lock:
>             if not cls.initialized:
>                 cls.initialized = True
>                 cls.rollover()
>
>     @staticmethod
>     def rollover():
>         print('RUNNING GARBAGE COLLECTOR', time.time())
>
>         class Object(object):
>             pass
>
>         # Create a reference cycle so these objects can only be
>         # reclaimed by the cyclic garbage collector.
>         o1 = Object()
>         o2 = Object()
>
>         o1.o = o2
>         o2.o = o1
>
>         o1.t = Monitor()
>
>         del o1
>         del o2
>
>     def __del__(self):
>         # Runs when the garbage collector reclaims the cycle above,
>         # so each collection schedules the next marker object.
>         Monitor.count += 1
>         Monitor.rollover()
>
> Monitor.initialize()
>
> Then monitor the log file and see if it periodically outputs 'RUNNING 
> GARBAGE COLLECTOR' or whether it stops being output after a while.
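
As an aside, on Python 3.3+ the same check can be done more directly with gc.callbacks, which the interpreter invokes around every collection (this is not available on Python 2.7, so it would only apply after an upgrade):

```python
import gc
import sys
import time

def gc_marker(phase, info):
    # The interpreter calls this at the start and stop of every
    # collection; if the marker stops appearing in the error log,
    # the collector is no longer running.
    if phase == 'start':
        sys.stderr.write('RUNNING GARBAGE COLLECTOR %s\n' % time.time())

gc.callbacks.append(gc_marker)
```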
>
> Graham
>
> On 16/02/2015, at 9:29 PM, Paul Royik <[email protected]> wrote:
>
> I'm sorry. Django 1.7.1
>
> On Monday, February 16, 2015 at 12:19:33 PM UTC+2, Graham Dumpleton wrote:
>
> I asked what version of Django are you running? Not Python.
>
> Graham
>
> On 16/02/2015, at 9:12 PM, Paul Royik <[email protected]> wrote:
>
> I'm using Python 2.7.9
> So, what solution do you propose?
>
> Is there any way to kill a thread? Because things are now working worse 
> than with the unstoppable thread. I'm hitting the request timeout.
> Also, memory grows on every request.
> Maybe there is a way to kill the thread doing the external calculations?
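
(For context: CPython has no safe way to forcibly kill a thread. The usual pattern is cooperative cancellation, where the worker checks a flag between steps of its work. A minimal sketch with illustrative names:)

```python
import threading

def calculate(stop_event, results):
    # Break the long calculation into small steps and check the
    # flag between steps so the thread can be asked to stop.
    for i in range(1000000):
        if stop_event.is_set():
            return
        results.append(i * i)

stop = threading.Event()
results = []
worker = threading.Thread(target=calculate, args=(stop, results))
worker.start()

stop.set()               # ask the worker to stop...
worker.join(timeout=5)   # ...and wait for it to exit
```

If the calculation happens inside an external library that never checks the flag, the thread cannot be stopped from the outside; it has to finish on its own.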
>
> On Monday, February 16, 2015 at 12:04:23 PM UTC+2, Graham Dumpleton wrote:
>
> What version of Django are you running? Older Django versions have a bug 
> in them which can cause the Python garbage collector to block and no longer 
> run. Memory usage will go up because Python objects are not reclaimed 
> properly. It is not out of the question that other third-party libraries 
> could cause this as well, and I can see a correlation between it and the 
> fact that you are hitting the queue timeout, where request threads are 
> blocking on a thread mutex.
>
> The MaxRequestWorkers warning is because you ran out of capacity in the 
> daemon processes due to all your long running and/or hung 
> requests. The fact that you are hitting the request timeout means that the 
> daemon process has likely stopped taking more requests, and eventually, as 
> new requests back up, the Apache child worker processes also complain 
> as they run out of capacity for proxying requests.
>
> So it is an outcome of the problems you are having. You still need to work 
> out the underlying problem.
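
(The capacity settings involved here look roughly like this; the directive names are real Apache/mod_wsgi ones, but the values are purely illustrative and would need tuning for the actual site:)

```apache
# Apache MPM capacity for the child worker processes proxying requests.
MaxRequestWorkers 150

# mod_wsgi daemon process group that runs the actual Python requests.
WSGIDaemonProcess mysite processes=2 threads=15 request-timeout=60
WSGIProcessGroup mysite
```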
>
> Graham
>
> On 16/02/2015, at 8:53 PM, Paul Royik <[email protected]> wrote:
>
> I also got the following error:
> server reached MaxRequestWorkers setting, consider raising the 
> MaxRequestWorkers setting
>
> On Sunday, February 15, 2015 at 10:42:19 PM UTC+2, Paul Royik wrote:
>
> As I discovered, memory grows on every request, not only for the 
> calculator, but even in the admin. The situation is very close to this: 
>
> ...

-- 
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
