>> (self.name, content)
>> File "/usr/local/lib/python2.7/dist-packages/asgi_redis/core.py", line 184, in send
>>     raise self.ChannelFull
>> asgiref.base_layer.ChannelFull
>> 4.) Running 28 workers on 10 cores.
>> 5.) For a very small number of concurrent users, the configuration runs
>> fine. But as already said, I did load testing on a Tornado server (4 cores)
>> and it ran fine, whereas with django-daphne I am giving 10 cores to the
>> workers and am still getting 503 and 504 error codes from haproxy. Maybe I
>> misunderstood something.
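(For context, the ChannelFull traceback above is what asgi_redis raises when
a channel hits its capacity. A minimal sketch of guarding a send against it
under Channels 1.x - the channel name and payload below are hypothetical:)

    from asgiref.base_layer import BaseChannelLayer
    from channels import Channel

    def safe_send(channel_name, payload):
        """Send on a channel, dropping the message if the layer is full."""
        try:
            # immediately=True forces the send right away instead of at the
            # end of the current consumer, so ChannelFull surfaces here.
            Channel(channel_name).send(payload, immediately=True)
        except BaseChannelLayer.ChannelFull:
            # Log and back off / retry later instead of crashing the worker.
            pass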
For those who have the same CPU usage issue - my mistake was using
RedisLocalChannelLayer in combination with the delay server.
It looks like many delayed messages just hang on nodes. I thought they would
be executed on the same node, but it seems they were partially consumed and
partially left hanging.
Then I've tried
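(For reference, switching away from RedisLocalChannelLayer is just a change
of backend in CHANNEL_LAYERS. A rough Channels 1.x sketch - the Redis host
and routing module below are placeholders:)

    # settings.py (Channels 1.x)
    CHANNEL_LAYERS = {
        "default": {
            # Original setup as described above; the poster's diagnosis was
            # that delayed messages ended up hanging on some nodes:
            # "BACKEND": "asgi_redis.RedisLocalChannelLayer",

            # Plain Redis layer: all nodes consume from the same Redis queues.
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("redis-host", 6379)],
            },
            "ROUTING": "myproject.routing.channel_routing",
        },
    }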
Thanks for the suggestions, but I have found nothing suspicious.
But I've installed pyinotify and switched to running workers in separate
processes manually instead of using the --threads option.
Not sure which of these actions helped more, but now CPU usage floats around
15-25%.
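(For anyone wanting to try the same change, a rough sketch of the difference
- how the processes are supervised is up to you, the commands below are just
illustrative:)

    # one multi-threaded worker (the original setup):
    python manage.py runworker --threads=4

    # vs. several single-threaded worker processes, each started separately
    # (e.g. one supervisord/systemd unit per process):
    python manage.py runworker &
    python manage.py runworker &
    python manage.py runworker &
    python manage.py runworker &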
On Wednesday, May 10,
On Tuesday 09 May 2017 23:21:23 qnub wrote:
> Also it may be my fault somewhere (and I'm pretty sure it is), but I'm
> not sure where to start my investigation.

I would start with strace[1] - a common cause for this is expecting a
resource that does not exist and retrying forever. A filename or
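(A concrete example of that kind of strace session - the PID below is a
placeholder for one of the busy worker processes:)

    # attach to a worker, follow forks, and summarise which syscalls dominate:
    strace -c -f -p 12345

    # or watch file-open attempts live, looking for the same missing path
    # being tried over and over:
    strace -f -e trace=open,stat -p 12345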
Thanks for the answer!

python manage.py runworker --threads=4

This process consumes the CPU resources (the daphne and delay processes
don't seem significant here).
I'll try to describe it step by step:
1. daphne, workers and delay are started and the workers consume about 10% CPU
2. open a new tab in Chrome and the worker's
Daphne does tend to idle hot, but this is so it performs better under high
load. It's not clear from your description which of the processes is using
more CPU as connections come through and then disconnect - is it Daphne or
is it runworker?
Andrew
On Mon, May 8, 2017 at 5:46 AM, qnub
We run a cluster of 3 docker containers (on separate machines) with daphne
and workers --threads=4 in each container. Each container consumes about 15%
of the machine's CPU after start. Then it starts consuming an additional
4-5% of CPU per new connection and seems not to free these resources
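(On Andrew's question above about which process is actually using the CPU, a
quick way to check inside one of the containers - the process names are
assumptions based on the setup described:)

    # per-process CPU for daphne vs. the runworker processes:
    ps -C daphne -o pid,%cpu,etime,args
    ps aux | grep '[r]unworker'

    # or watch them live while connections are opened and closed:
    top -c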