Re: [uWSGI] Help

2019-03-04 Thread Tamer Higazi

Okay, I didn't see the AWS term; I had tomatoes on my eyes.
Sorry.

>> You have a dual core CPU (perhaps 32 bit even) and then with 4GB RAM.


> Main problem!? Really!?
>
> He's on AWS, of course it's 64-bit. 2 CPU cores is a LOT of power for more
> than 20 users. I serve 50 concurrent users on AWS with a SINGLE core
> and 1GB of RAM; that doesn't scratch 1% of CPU usage, and it is writing to
> a Postgres DB all the time. Admittedly I don't use Python/Flask, but 4GB is
> more than enough for that.


Ahh, that is of course one other thing.

I apologize.


> Now, for the log error message it is a bit unclear what happens;
> trying to mimic the problem with 'wrk', perhaps with a simple app
> to reproduce, would help.


I would advise installing terminator and splitting the screen:

one pane for:
tail -f /var/log/myapp.log

the other pane for:
top

so you can watch resource usage and see what gets written to the log at
the moment of the crash.


best, Tamer

___
uWSGI mailing list
uWSGI@lists.unbit.it
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi


Re: [uWSGI] Help

2019-03-03 Thread Daniel Nicoletti
On Sat, 2 Mar 2019 at 10:58, Tamer Higazi
 wrote:
>
> Dear Leo,
>
> The main problem is that the hardware already has very poor specs.
> You have a dual core CPU (perhaps even 32-bit) and only 4GB of RAM.
Main problem!? Really!?

He's on AWS, of course it's 64-bit. 2 CPU cores is a LOT of power for more
than 20 users. I serve 50 concurrent users on AWS with a SINGLE core
and 1GB of RAM; that doesn't scratch 1% of CPU usage, and it is writing to
a Postgres DB all the time. Admittedly I don't use Python/Flask, but 4GB is
more than enough for that.

>
> You want to provide services and even earn money with them, yet you
> start by saving money on hardware.
> Why this nonsense ? Nobody said that you have to buy HIGH-END hardware,
> but get the requirements first before doing anything.
Sorry, the nonsense is caring about hardware for just 20 users...

>
> Have you turned on logging and see what had been written inside that
> causes the crash?
> What is written in the logs ?
> Have you opened a shell and executed "top" to see what resources are
> consumed ?
>
> Serving long-term connections is also no problem.
> I have dealt with WebSocket connections using a hand-written Python stack
> and nginx, without any problems at all.
>
> And why use the Flask web framework for REST calls ?
> Take Twisted, for example, or something totally small:
> http://docs.python-eve.org/en/latest/
>
>
> best, Tamer

Now, for the log error message it is a bit unclear what happens;
trying to mimic the problem with 'wrk', perhaps with a simple app
to reproduce, would help.
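When 'wrk' isn't at hand, the reproduction Daniel suggests can be approximated with nothing but the Python standard library. A rough sketch (the echo handler here is a stand-in for the app under test; in practice you would point `hammer()` at the real endpoint instead):

```python
import threading
import urllib.request
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class _Echo(BaseHTTPRequestHandler):
    """Trivial stand-in app; replace with the service under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the console quiet during the run

def hammer(url, concurrency=10, total=30):
    """Fire `total` GETs with `concurrency` parallel workers; count outcomes."""
    def one(_):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status
        except Exception:
            return "error"
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return Counter(pool.map(one, range(total)))

# Demo against a throwaway local server on an ephemeral port.
server = ThreadingHTTPServer(("127.0.0.1", 0), _Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()
counts = hammer(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

Watching the status-code counts while raising `concurrency` past 20 should show whether the breakage is reproducible outside the browser.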


>
>
> PS: it is best to answer to the list and not put individual addresses in
> CC, like everybody else does.
>
>
> On 02.03.19 14:40, Léo El Amri wrote:
> > On 02/03/2019 14:04, Tamer Higazi wrote:
> >> 2. And with your comment "So please help to scale the application for
> >> concurrent users."
> >>
> >> is very impolite.
> > I think it was just badly written English. I don't think they meant to
> > be impolite.



-- 
Daniel Nicoletti

KDE Developer - http://dantti.wordpress.com


Re: [uWSGI] Help

2019-03-02 Thread Léo El Amri
Tamer,

On 02/03/2019 14:58, Tamer Higazi wrote:
> The main problem is that the hardware already has very poor specs.
> You have a dual core CPU (perhaps even 32-bit) and only 4GB of RAM.

I think a two-core, 4GB RAM server, as long as it is not running on
5MHz CPUs, should be more than enough to serve 20 simultaneous clients
without causing them to time out.

> You want to provide services and even earn money with them, yet you
> start by saving money on hardware.
> Why this nonsense ? Nobody said that you have to buy HIGH-END hardware,
> but get the requirements first before doing anything.

Nobody said that

> Serving long-term connections is also no problem.
> I have dealt with WebSocket connections using a hand-written Python stack
> and nginx, without any problems at all.

Keeping an HTTP stream open for a long time IS a problem. It has to be
handled with special care, since the naive implementation ties up a
complete thread or process. This is what we sometimes used back in the
Ajax days (HTTP long polling).

WebSockets IS NOT HTTP long polling. WebSockets effectively negotiates
a protocol switch (named "upgrade" in the RFC) using HTTP, then drops
HTTP semantics. Most of the libraries handling WebSockets properly
handle resource sharing (and, in Python, often force the allocation of
a separate thread/process).

IIRC uWSGI itself can handle WebSocket connections, and it allocates an
entire thread or process for this.
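For reference, the "upgrade" negotiation mentioned above boils down to one hash: the server proves it understood the WebSocket request by echoing back a digest of the client's `Sec-WebSocket-Key`. A minimal sketch of that step, using RFC 6455's own sample key:

```python
import base64
import hashlib

# Fixed GUID from RFC 6455; every conforming server concatenates this value.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header for a given Sec-WebSocket-Key."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample handshake key taken from the RFC itself:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
# -> "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

After this exchange the connection stops speaking HTTP entirely, which is why the long-lived socket then needs its own thread, process, or async loop.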

> PS: it is best to answer to the list and not put individual addresses in
> CC, like everybody else does.

I always answer to the list and to the targets of my messages. AFAIK,
this is not against the rules here (tell me if I'm wrong).

-- 
Cordially,
Léo


Re: [uWSGI] Help

2019-03-02 Thread Tamer Higazi

Dear Leo,

The main problem is that the hardware already has very poor specs.
You have a dual core CPU (perhaps even 32-bit) and only 4GB of RAM.

You want to provide services and even earn money with them, yet you
start by saving money on hardware.
Why this nonsense ? Nobody said that you have to buy HIGH-END hardware,
but get the requirements first before doing anything.


Have you turned on logging to see what was written when the crash
happened?

What is written in the logs ?
Have you opened a shell and executed "top" to see what resources are
consumed ?


Serving long-term connections is also no problem.
I have dealt with WebSocket connections using a hand-written Python stack
and nginx, without any problems at all.


And why use the Flask web framework for REST calls ?
Take Twisted, for example, or something totally small:
http://docs.python-eve.org/en/latest/



best, Tamer


PS: it is best to answer to the list and not put individual addresses in
CC, like everybody else does.



On 02.03.19 14:40, Léo El Amri wrote:

> On 02/03/2019 14:04, Tamer Higazi wrote:
>> 2. And with your comment "So please help to scale the application for
>> concurrent users."
>>
>> is very impolite.
>
> I think it was just badly written English. I don't think they meant to
> be impolite.



Re: [uWSGI] Help

2019-03-02 Thread Léo El Amri
On 02/03/2019 14:04, Tamer Higazi wrote:
> 2. And with your comment "So please help to scale the application for
> concurrent users."
>
> is very impolite.

I think it was just badly written English. I don't think they meant to
be impolite.

On 02/03/2019 14:04, Tamer Higazi wrote:
> 1. Get yourself proper hardware; that would solve perhaps 80% of your
> problems.
>
> Here is a good starting point:
>
> http://nginx.org/en/docs/http/load_balancing.html
> https://uwsgi-docs.readthedocs.io/en/latest/Broodlord.html
> https://uwsgi-docs.readthedocs.io/en/latest/Fastrouter.html
>
> On 01.03.19 07:27, Ashraf Mohamed wrote:
>>
>> I have a Flask application running behind an nginx server, and I am
>> unable to serve more than 20 concurrent users before the application
>> breaks.
>>
>> *_APP Architecture:_*
>> I have 2 applications running on different servers: app 1 (used for the
>> frontend) and app 2 (used for REST API calls); both are Flask
>> applications.

I highly suspect the application is deadlocking on itself. I don't
recommend building a web application that keeps the HTTP stream open
while doing a long job. You should pass the job to a worker in the
backend and answer the client right away. Then the client should poll
another endpoint regularly to check the job's status.
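Framework aside, the queue-and-poll pattern described above can be sketched with the standard library alone. The names (`submit`, `status`) are illustrative, not anything from the thread; in a Flask app these would be the bodies of a POST handler and a GET `/jobs/<id>` handler:

```python
import threading
import time
import uuid

_jobs = {}            # job_id -> {"status": ..., "result": ...}
_lock = threading.Lock()

def submit(task, *args):
    """What the HTTP handler would do instead of blocking: queue and return."""
    job_id = str(uuid.uuid4())
    with _lock:
        _jobs[job_id] = {"status": "pending", "result": None}
    def run():
        result = task(*args)                 # the long-running work
        with _lock:
            _jobs[job_id] = {"status": "done", "result": result}
    threading.Thread(target=run, daemon=True).start()
    return job_id                            # client gets this immediately

def status(job_id):
    """What a GET /jobs/<id> endpoint would return for polling clients."""
    with _lock:
        return dict(_jobs[job_id])

jid = submit(lambda a, b: a + b, 2, 3)       # request returns at once
while status(jid)["status"] != "done":       # the client's polling loop
    time.sleep(0.01)
result = status(jid)["result"]
```

In production the in-memory dict would be a shared store (database, Redis, uWSGI's spooler) so any worker process can answer the status poll, but the control flow is the same.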

-- 
Cordially,
Léo


Re: [uWSGI] Help

2019-03-02 Thread Tamer Higazi

Dear Ashraf,

1. Get yourself proper hardware; that would solve perhaps 80% of your
problems.


2. Your comment "So please help to scale the application for
concurrent users."


is very impolite.

Nobody has to help you, and people in the open source world don't provide
support for commercial vendors unless they are PAID for it.
If you want to get it running NOW, then look for a company that will work
out your problems, and make a deal with them.


Otherwise, you are old enough to work out the know-how yourself.

Here is a good starting point:

http://nginx.org/en/docs/http/load_balancing.html
https://uwsgi-docs.readthedocs.io/en/latest/Broodlord.html
https://uwsgi-docs.readthedocs.io/en/latest/Fastrouter.html

If you have worked things out yourself, still can't get it running, and
ask "GENTLY" what you have done wrong, then you will get help.



best, Tamer


On 01.03.19 07:27, Ashraf Mohamed wrote:

Hi ,

I have a Flask application running behind an nginx server, and I am
unable to serve more than 20 concurrent users before the application
breaks.


*_Error:_*
app: 0|req: 1/35] x.x.x.x () {44 vars in 5149 bytes} [Thu Feb  7 
14:01:42 2019] GET /url/edit/7e08e5c4-11cf-485b-9b05-823fd4006a60 => 
generated 0 bytes in 69000 msecs (HTTP/2.0 200) 4 headers in 0 bytes 
(1 switches on core 0)

*_OS version:_*
ubuntu 16.04 (aws)

*_CPU:_*
2 Core with 4 GB RAM

*_WebServer:_*
nginx version: nginx/1.15.0

*_APP Architecture:_*
I have 2 applications running on different servers: app 1 (used for the
frontend) and app 2 (used for REST API calls); both are Flask
applications.


*_app1 uWSGI config :_*
[uwsgi]
module = wsgi
master = true
processes = 3
socket = app.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = test.log
buffer-size=7765535
worker-reload-mercy = 240
thunder-lock = true
async=10
ugreen
listen = 950
enable-threads= True

*_app 1 nginx config_*

user  root;
worker_processes  5;
events {
    worker_connections  4000;
}
http {
    server {
       limit_req zone=mylimit burst=20 nodelay;
       limit_req_status 444;
        listen 80 backlog=1000;
         listen [::]:80;
        server_name domain name;
        location /static {
           alias /home/ubuntu/flaskapp/app/static;
        }
        location / {
            include uwsgi_params;
uwsgi_read_timeout 120;
client_max_body_size 1000M;
          uwsgi_pass unix:///home/ubuntu/flaskapp/app.sock;
       }

    }

}


*_app 2 uWsgi config:_*

[uwsgi]
module = wsgi
master = true
processes = 5
socket = app2.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = sptms.log
async = 10
ugreen
worker-reload-mercy = 240
enable-threads = true
thunder-lock = true
listen=2000
buffer-size=65535
no-defer-accept=true
stats=stats.sock
memory-report = true

*_app 2 nginx config :_*
worker_processes  1;
events {
    worker_connections  1024;
}
http {
access_log /var/log/nginx/access.log;
proxy_connect_timeout 2000;
proxy_read_timeout 2000;
fastcgi_read_timeout 2000;
error_log /var/log/nginx/error.log info;
    include       mime.types;
    gzip on;
    server {
        listen 80 backlog=2048;
        server_name x.x.x.x;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:///home/ubuntu/app/app2.sock;
#keepalive_timeout 155s;
        }
    }
}


So please help to scale the application for concurrent users.


Thanks
Ashraf



[uWSGI] Help

2019-02-28 Thread Ashraf Mohamed
Hi ,

I have a Flask application running behind an nginx server, and I am unable
to serve more than 20 concurrent users before the application breaks.

*Error:*
app: 0|req: 1/35] x.x.x.x () {44 vars in 5149 bytes} [Thu Feb  7 14:01:42
2019] GET /url/edit/7e08e5c4-11cf-485b-9b05-823fd4006a60 => generated 0
bytes in 69000 msecs (HTTP/2.0 200) 4 headers in 0 bytes (1 switches on
core 0)

*OS version:*
ubuntu 16.04 (aws)

*CPU:*
2 Core with 4 GB RAM

*WebServer:*
nginx version: nginx/1.15.0

*APP Architecture:*
I have 2 applications running on different servers: app 1 (used for the
frontend) and app 2 (used for REST API calls); both are Flask applications.


*app1 uWSGI config :*

[uwsgi]
module = wsgi
master = true
processes = 3
socket = app.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = test.log
buffer-size=7765535
worker-reload-mercy = 240
thunder-lock = true
async=10
ugreen
listen = 950
enable-threads= True

*app 1 nginx config*


user  root;
worker_processes  5;
events {
worker_connections  4000;
}
http {
server {
   limit_req zone=mylimit burst=20 nodelay;
   limit_req_status 444;
listen 80 backlog=1000;
 listen [::]:80;
server_name domain name;
location /static {
   alias /home/ubuntu/flaskapp/app/static;
}
location / {
include uwsgi_params;
uwsgi_read_timeout 120;
client_max_body_size 1000M;
  uwsgi_pass unix:///home/ubuntu/flaskapp/app.sock;
   }

}

}
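One thing worth double-checking in this config (an observation, not something raised in the thread): `limit_req zone=mylimit` refers to a shared zone that is never defined here, and nginx normally refuses to start when the zone is unknown, unless the `limit_req_zone` directive lives in an included file that isn't shown. A minimal sketch of the missing piece, with illustrative key, size, and rate values:

```nginx
http {
    # Defines the shared memory zone that "limit_req zone=mylimit" uses:
    # 10m of state keyed by client address, 10 requests/second steady rate.
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

    server {
        location / {
            # Same directive as in the config above, now with its zone defined.
            limit_req zone=mylimit burst=20 nodelay;
        }
    }
}
```

Note also that `limit_req_status 444` makes nginx drop rate-limited connections without a response, which from the browser looks exactly like the app "breaking" under load.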


*app 2 uWsgi config:*

[uwsgi]
module = wsgi
master = true
processes = 5
socket = app2.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = sptms.log
async = 10
ugreen
worker-reload-mercy = 240
enable-threads = true
thunder-lock = true
listen=2000
buffer-size=65535
no-defer-accept=true
stats=stats.sock
memory-report = true

*app 2 nginx config :*

worker_processes  1;
events {
worker_connections  1024;
}
http {
access_log /var/log/nginx/access.log;
proxy_connect_timeout 2000;
proxy_read_timeout 2000;
fastcgi_read_timeout 2000;
error_log /var/log/nginx/error.log info;
include   mime.types;
gzip on;
server {
listen 80 backlog=2048;
server_name x.x.x.x;
location / {
include uwsgi_params;
uwsgi_pass unix:///home/ubuntu/app/app2.sock;
#keepalive_timeout 155s;
}
}
}


So please help to scale the application for concurrent users.


Thanks
Ashraf


[uWSGI] help

2016-11-11 Thread Norbert Wasilewski
-- Forwarded message -
From: 
Date: Fri, 11.11.2016 at 17:23
Subject: Welcome to the "uWSGI" mailing list (Digest mode)
To: 


Welcome to the uWSGI@lists.unbit.it mailing list!

To post to this list, send your email to:

  uwsgi@lists.unbit.it

General information about the mailing list is at:

  http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi

If you ever want to unsubscribe or change your options (eg, switch to
or from digest mode, change your password, etc.), visit your
subscription page at:

  http://lists.unbit.it/cgi-bin/mailman/options/uwsgi/kummpel%40gmail.com


You can also make such adjustments via email by sending a message to:

  uwsgi-requ...@lists.unbit.it

with the word `help' in the subject or body (don't include the
quotes), and you will get back a message with instructions.

You must know your password to change your options (including changing
the password, itself) or to unsubscribe.  It is:

  ondaogaz

Normally, Mailman will remind you of your lists.unbit.it mailing list
passwords once every month, although you can disable this if you
prefer.  This reminder will also include instructions on how to
unsubscribe or change your account options.  There is also a button on
your options page that will email your current password to you.


Re: [uWSGI] help with documentation

2015-10-19 Thread Riccardo Magliocchetti

Hello,

On 07/10/2015 15:07, Riccardo Magliocchetti wrote:

Hello,

I've started labeling issues on GitHub that I think are documentation issues:

https://github.com/unbit/uwsgi/issues?q=is%3Aopen+is%3Aissue+label%3Adocumentation

They are quite a few and most of them take just a few minutes to fix.

I've labeled everything that can help avoid having support requests filed
as GitHub issues :)

And of course there are the ones filed in the uwsgi-docs GitHub project:
https://github.com/unbit/uwsgi-docs/issues

Anyone want to tackle some of these?


Fixed and pushed some last week. I've added a Makefile in order to build the
documentation locally. That highlighted quite a few Sphinx warnings that
should be easy to fix. Again, any help is appreciated.


thanks

--
Riccardo Magliocchetti
@rmistaken

http://menodizero.it


[uWSGI] help with documentation

2015-10-07 Thread Riccardo Magliocchetti

Hello,

I've started labeling issues on GitHub that I think are documentation issues:

https://github.com/unbit/uwsgi/issues?q=is%3Aopen+is%3Aissue+label%3Adocumentation

They are quite a few and most of them take just a few minutes to fix.

I've labeled everything that can help avoid having support requests filed
as GitHub issues :)


And of course there are the ones filed in the uwsgi-docs GitHub project:
https://github.com/unbit/uwsgi-docs/issues

Anyone want to tackle some of these?

thanks

--
Riccardo Magliocchetti
@rmistaken

http://menodizero.it


[uWSGI] [Help Needed] Cannot allocate memory

2015-02-06 Thread Poh Yee Hui

Hi,

I would like to ask for some help regarding issues with memory allocation.
This is the error that I have found in the log.

malloc(): Cannot allocate memory [core/utils.c line 1781]
!!! tried memory allocation of 1768781160 bytes !!!
*** backtrace of 7276 ***
*** end of backtrace ***

This is the system I am using:

 * Raspberry Pi model B (first generation)
 * Raspbian with kernel 3.12.35+
 * uWSGI 2.0.9
 * Nginx 1.2.1
 * Python 2.7.3

My uWSGI configuration is as follows.

[uwsgi]
socket = /var/run/uwsgi/myapp.sock
;http = :8080
stats = 127.0.0.1:9191
pidfile = /var/run/uwsgi/myapp.pid
daemonize = /var/log/uwsgi/myapp.log
log-slow = true

master = true

chdir = /srv/myapp
wsgi-file = /srv/myapp/main.py

processes = 4
enable-threads = true
threads = 2 ; 4
offload-threads = 0 ; 1
single-interpreter = true

harakiri = 30
limit-post = 65536
post-buffering = 8192

uid = root
;uid = appuserid
#gid = i2c

#queue = 10
queue = 10
#queue-blocksize = 50
queue-blocksize = 50
reload-os-env = true

locks = 2

The backtrace seems to be empty.
Do I have to manually activate the backtrace in order to see what is
going on?
Thanks.

--


Regards,
Poh Yee Hui



Re: [uWSGI] [Help Needed] Cannot allocate memory

2015-02-06 Thread Roberto De Ioris

 Hi,

 I would like to ask for some help regarding issues with memory allocation.
 This is the error that I have found in the log.

 malloc(): Cannot allocate memory [core/utils.c line 1781]
 !!! tried memory allocation of 1768781160 bytes !!!
 *** backtrace of 7276 ***
 *** end of backtrace ***

 This is the system I am using:

   * Raspberry Pi model B (first generation)
   * Raspbian with kernel 3.12.35+
   * uWSGI 2.0.9
   * Nginx 1.2.1
   * Python 2.7.3

 My uWSGI configuration is as follows.

 [uwsgi]
 socket = /var/run/uwsgi/myapp.sock
 ;http = :8080
 stats = 127.0.0.1:9191
 pidfile = /var/run/uwsgi/myapp.pid
 daemonize = /var/log/uwsgi/myapp.log
 log-slow = true

 master = true

 chdir = /srv/myapp
 wsgi-file = /srv/myapp/main.py

 processes = 4
 enable-threads = true
 threads = 2 ; 4
 offload-threads = 0 ; 1
 single-interpreter = true

 harakiri = 30
 limit-post = 65536
 post-buffering = 8192

 uid = root
 ;uid = appuserid
 #gid = i2c

 #queue = 10
 queue = 10
 #queue-blocksize = 50
 queue-blocksize = 50
 reload-os-env = true

 locks = 2

 The backtrace seems to be empty.
 Do I have to manually activate the backtrace in order to see what is
 going on?
 Thanks.

 --


Not on Linux; probably your process is completely messed up.

You are trying to allocate 1.6 gigs of memory.

Are you using some uWSGI API function in your app ? If yes, can you paste
it ?

Can you paste the whole log up to the memory allocation error ?

-- 
Roberto De Ioris
http://unbit.com
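A debugging trick worth noting here (an editor's observation, not something raised in the thread): when uWSGI reports an absurd allocation size, the "size" field it read off the socket is sometimes not a size at all, but text from a client speaking the wrong protocol to a uwsgi socket. Decoding the number back into bytes shows whether it looks like ASCII, and for this particular value all four bytes are printable:

```python
import struct

# The failed allocation size from the log above.
size = 1768781160

# Interpret the 32-bit value as raw bytes, the way a little-endian
# machine (x86, the Pi's ARM) would have read them off the wire.
as_bytes = struct.pack("<I", size)
all_printable = all(0x20 <= b < 0x7F for b in as_bytes)  # every byte is ASCII text
```

Printable bytes strongly suggest the peer sent something other than a uwsgi packet (for example, raw HTTP to a `socket =` address), which fits the "completely messed up protocol" theory.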


Re: [uWSGI] [Help Needed] Cannot allocate memory

2015-02-05 Thread Poh Yee Hui

My uWSGI configuration is as follows.

[uwsgi]
socket = /var/run/uwsgi/myapp.sock
;http = :8080
stats = 127.0.0.1:9191
pidfile = /var/run/uwsgi/myapp.pid
daemonize = /var/log/uwsgi/myapp.log
log-slow = true

master = true

chdir = /srv/myapp
wsgi-file = /srv/myapp/main.py

processes = 4
enable-threads = true
threads = 2 ; 4
offload-threads = 0 ; 1
single-interpreter = true

harakiri = 30
limit-post = 65536
post-buffering = 8192

uid = root
;uid = appuserid
#gid = i2c

#queue = 10
queue = 10
#queue-blocksize = 50
queue-blocksize = 50
reload-os-env = true

locks = 2



Regards,
Poh Yee Hui

On 2/4/2015 10:36 PM, Roberto De Ioris wrote:

Hi,

I would like to ask for some help regarding issues with memory allocation.
This is the error that I have found in the log.

malloc(): Cannot allocate memory [core/utils.c line 1781]
!!! tried memory allocation of 1768781160 bytes !!!
*** backtrace of 7276 ***
*** end of backtrace ***

This is the configuration I am using:

   * Raspberry Pi model B (first generation)
   * Raspbian with kernel 3.12.35+
   * uWSGI 2.0.9
   * Nginx 1.2.1
   * Python 2.7.3

The backtrace seems to be empty.
Do I have to manually activate the backtrace in order to see what is
going on?
Thank you.

-

Hi, paste your uWSGI configuration; you are allocating a huge memory area.
Let's see if it is caused by some configuration option.






[uWSGI] [Help Needed] Cannot allocate memory

2015-02-04 Thread Poh Yee Hui

Hi,

I would like to ask for some help regarding issues with memory allocation.
This is the error that I have found in the log.

malloc(): Cannot allocate memory [core/utils.c line 1781]
!!! tried memory allocation of 1768781160 bytes !!!
*** backtrace of 7276 ***
*** end of backtrace ***

This is the configuration I am using:

 * Raspberry Pi model B (first generation)
 * Raspbian with kernel 3.12.35+
 * uWSGI 2.0.9
 * Nginx 1.2.1
 * Python 2.7.3

The backtrace seems to be empty.
Do I have to manually activate the backtrace in order to see what is 
going on?

Thank you.

--


Regards,
Poh Yee Hui



Re: [uWSGI] [Help Needed] Cannot allocate memory

2015-02-04 Thread Roberto De Ioris

 Hi,

 I would like to ask for some help regarding issues with memory allocation.
 This is the error that I have found in the log.

 malloc(): Cannot allocate memory [core/utils.c line 1781]
 !!! tried memory allocation of 1768781160 bytes !!!
 *** backtrace of 7276 ***
 *** end of backtrace ***

 This is the configuration I am using:

   * Raspberry Pi model B (first generation)
   * Raspbian with kernel 3.12.35+
   * uWSGI 2.0.9
   * Nginx 1.2.1
   * Python 2.7.3

 The backtrace seems to be empty.
 Do I have to manually activate the backtrace in order to see what is
 going on?
 Thank you.

 -

Hi, paste your uWSGI configuration; you are allocating a huge memory area.
Let's see if it is caused by some configuration option.


-- 
Roberto De Ioris
http://unbit.com


Re: [uWSGI] Help with nginx prematurely closed connection error (Django app)

2013-07-23 Thread Łukasz Mierzwa
I thought that subscription packets were UDP-based; my
--fastrouter-subscription-server option spawns a UDP socket.

@Cycle: switch the fastrouter-subscription-server and subscribe2 options
from file sockets to port-based ones.


2013/7/23 Roberto De Ioris robe...@unbit.it


 
 
 
  On 23/07/13 13:43, Roberto De Ioris wrote:
 
  On 23/07/13 11:54, Roberto De Ioris wrote:
 
  Sorry, but can you make a simple setup without the fastrouter and the
  subscription system to check if the Django part is ok ? What you are
  experiencing is the fastrouter closing the connection, as no backend is
  registered. Before starting to deal with it I would ensure the simple
  parts are ok.
 
 
  Roberto,
 
  Yes, everything works with a plain server, run manually with
 
  ./bin/uwsgi --xml test.xml
 
  No emperor, no router, no subscription-server.
 
  subscribe_to/subscribe-to makes no difference to the complex setup.
 
  gmf
 
 
 
  Ok, the emperor config is correct so i will focus on the vassal.
 
  Start the emperor instance (the one containing the fastrouter too) on a
  directory without vassals, and manually spawn a fake instance
  subscribing
  to it:
 
  uwsgi --subscribe-to path:domain --socket /tmp/foobar
 
  What you see in the logs of both the fake instance and the
  emperor/fastrouter ?
 
 
 
  Ah, a new error in the fake instance log:
 
  subscribing to
  /home/gmf/working/bugle-frontend.git/run/uwsgi-router.sock:dev1.localhost
  send_udp_message()/sendto(): Protocol wrong type for socket
  [core/protocol.c line 83]
  send_udp_message()/sendto(): Protocol wrong type for socket
  [core/protocol.c line 83]
  send_udp_message()/sendto(): Protocol wrong type for socket
  [core/protocol.c line 83]
 
  Haven't seen that before.
 
 
 
 
 


 wow, cool :)

 for some reason the parser thinks you are using a UDP address (???)

 try with the explicit form:

 --subscribe2 server=path,key=domain

 --
 Roberto De Ioris
 http://unbit.it




-- 
Łukasz Mierzwa


[uWSGI] Help with the --reaper option for experienced uwsig users?

2012-10-22 Thread Andrew Fischer
I've been running uwsgi for about a year and I've run into a situation
that I can't seem to sort out. I'm not positive there is a good
solution, but maybe someone with far more knowledge than I could shed
some light. I'll give some background, please bear with me.

I run uwsgi behind nginx to run a simple mercurial hgweb server. My
uwsgi configuration is pretty basic:

-M -p 4 -d /var/log/$daemon_name.log --pidfile /run/$daemon_name.pid
--pythonpath $hgwebpath --module $hgwebmodule

However, I recently added buildbot to our setup, which is triggered by
a commit hook in hgweb. It's all built in stuff, I didn't write any of
it.

Unfortunately this hook uses fork, and so generates defunct uwsgi
instances when it occurs. It appears to be a known issue with the
buildbot.

I decided uwsgi's --reaper option looked like it might help me out. It
did the trick, very handy since I didn't want to wade into the
buildbot codebase. Like the manual for --reaper says you should fix
your process spawning usage (if you can) ... and I don't think I can.

However, after enabling reaper I noticed that very large commit pushes
to hgweb over http would cause the process to be killed. It would
happen anytime a push of 20MB or larger was pushed up to the server.
(This is extremely rare, we just happen to have a project that carries
this much baggage).

After a lot of reading and testing, I found that by removing the
--reaper option from uwsgi, the commits would no longer be killed. I
could push up as large a bundle as I liked (+100MB). However, without
the reaper my buildbot is back to leaving zombies all over the place.

Do any of you know more about the --reaper option, and if there is any
additional control over how it determines what a zombie process is? Or
is there a different uwsgi option I should use? I fully realize
uwsgi is not the problem here; I blame uwsgi and buildbot. But since
uwsgi is so flexible I wondered if there might be a way to have my
cake and eat it too, so to speak.

Big thanks for any feedback.
-Andrew
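For anyone wondering what a reaper amounts to (a sketch of the general mechanism, not uWSGI's actual implementation): zombies exist because an exited child stays in the process table until someone wait()s for it, and a reaper is essentially a periodic non-blocking waitpid() sweep. POSIX-only sketch:

```python
import os
import time

# A child is forked and exits without the parent calling wait():
pid = os.fork()
if pid == 0:
    os._exit(0)          # child: exit immediately

time.sleep(0.2)          # parent: the child is now a zombie ("defunct" in ps)

# The essence of a reaper: a non-blocking waitpid() sweep that collects
# any exited children without stalling the worker loop.
reaped_pid, exit_status = os.waitpid(-1, os.WNOHANG)
```

The danger Andrew hit is visible in the `-1` argument: an indiscriminate sweep can also collect children the application still cares about (like a long-running hook handling a large push), which is why fixing the forking code is preferred when possible.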



-- 
Andrew Fischer


Re: [uWSGI] Help with the --reaper option for experienced uwsig users?

2012-10-22 Thread Andrew Fischer
Sorry, I meant to say at the end: I realize uwsgi is not the problem
here; I blame *hgweb* and buildbot.

-Andrew


On Mon, Oct 22, 2012 at 9:14 AM, Andrew Fischer wizzr...@gmail.com wrote:
 I've been running uwsgi for about a year and I've run into a situation
 that I can't seem to sort out. I'm not positive there is a good
 solution, but maybe someone with far more knowledge than I could shed
 some light. I'll give some background, please bear with me.

 I run uwsgi behind nginx to run a simple mercurial hgweb server. My
 uwsgi configuration is pretty basic:

 -M -p 4 -d /var/log/$daemon_name.log --pidfile /run/$daemon_name.pid
 --pythonpath $hgwebpath --module $hgwebmodule

 However, I recently added buildbot to our setup, which is triggered by
 a commit hook in hgweb. It's all built in stuff, I didn't write any of
 it.

 Unfortunately this hook uses fork, and so generates defunct uwsgi
 instances when it occurs. It appears to be a known issue with the
 buildbot.

 I decided uwsgi's --reaper option looked like it might help me out. It
 did the trick, very handy since I didn't want to wade into the
 buildbot codebase. Like the manual for --reaper says you should fix
 your process spawning usage (if you can) ... and I don't think I can.

 However, after enabling reaper I noticed that very large commit pushes
 to hgweb over http would cause the process to be killed. It would
 happen anytime a push of 20MB or larger was pushed up to the server.
 (This is extremely rare, we just happen to have a project that carries
 this much baggage).

 After a lot of reading and testing, I found that by removing the
 --reaper option from uwsgi, the commits would no longer be killed. I
 could push up as large a bundle as I liked (+100MB). However, without
 the reaper my buildbot is back to leaving zombies all over the place.

 Do any of you know more about the --reaper option, and if there is any
 additional control over how it determines what a zombie process is? Or
 is there is a different uwsgi option I should use? I fully realize
 uwsgi is not the problem here; I blame uwsig and buildbot. But since
 uwsgi is so flexible I wondered if there might be a way to have my
 cake and eat it too, so to speak.

 Big thanks for any feedback.
 -Andrew



 --
 Andrew Fischer



-- 
Andrew Fischer
LT Engineering Software
http://ltengsoft.com


Re: [uWSGI] Help with the --reaper option for experienced uwsig users?

2012-10-22 Thread Łukasz Mierzwa
2012/10/22 Andrew Fischer wizzr...@gmail.com:

 However, I recently added buildbot to our setup, which is triggered by
 a commit hook in hgweb. It's all built in stuff, I didn't write any of
 it.

Do you use this hook:

http://buildbot.net/buildbot/docs/0.8.7/manual/cfg-changesources.html#mercurial-hook

? If not, then please share its code.

-- 
Łukasz Mierzwa


Re: [uWSGI] Help with the --reaper option for experienced uwsig users?

2012-10-22 Thread Roberto De Ioris

On 22 Oct 2012, at 16:14, Andrew Fischer wizzr...@gmail.com wrote:

 I've been running uwsgi for about a year and I've run into a situation
 that I can't seem to sort out. I'm not positive there is a good
 solution, but maybe someone with far more knowledge than I could shed
 some light. I'll give some background, please bear with me.
 
 I run uwsig behind nginx to run a simple mercurial hgweb server. My
 uwsgi configuration is pretty basic:
 
 -M -p 4 -d /var/log/$daemon_name.log --pidfile /run/$daemon_name.pid
 --pythonpath $hgwebpath --module $hgwebmodule
 
 However, I recently added buildbot to our setup, which is triggered by
 a commit hook in hgweb. It's all built in stuff, I didn't write any of
 it.
 
 Unfortunately this hook uses fork, and so generates defunct uwsgi
 instances when it occurs. It appears to be a known issue with the
 buildbot.
 
 I decided uwsgi's --reaper option looked like it might help me out. It
 did the trick, very handy since I didn't want to wade into the
 buildbot codebase. Like the manual for --reaper says you should fix
 your process spawning usage (if you can) ... and I don't think I can.
 
 However, after enabling reaper I noticed that very large commit pushes
 to hgweb over http would cause the process to be killed. It would
 happen anytime a push of 20MB or larger was pushed up to the server.
 (This is extremely rare, we just happen to have a project that carries
 this much baggage).


Do you get a C traceback or specific log lines when the worker dies ?


--
Roberto De Ioris
http://unbit.it
JID: robe...@jabber.unbit.it



Re: [uWSGI] Help me please with accessing cache between 2 instances

2012-05-22 Thread test157
Hello Roberto,

I'm getting this error while using it on the same machine, so I am not
using it over the network.




[uWSGI] Help me please with accessing cache between 2 instances

2012-05-19 Thread test157
Hi everyone.

I have 2 uWSGI instances:

1. the first one is running at 127.0.0.1:10001 with cache = 1000
   (reads/writes the cache)
2. the second is running at 127.0.0.1:10002 with cache = 1
   (reads/deletes values only from the first instance)

When I try to access the cache from the second instance with this
command: uwsgi.cache_get(val, '127.0.0.1:10001')

I get this error:
read(): Operation now in progress [proto/uwsgi.c line 40]
Sat May 19 06:46:40 2012 - error parsing request

Am I doing something totally wrong?

