Why would you be running a small website in ASGI mode with a single worker? 
My suspicion is that someone using Django in ASGI mode has a specific 
reason to do so. Otherwise, why not run it in WSGI mode?


On Monday, September 26, 2016 at 2:25:04 PM UTC-5, ludovic coues wrote:
>
> What you call a pathological case is a small website, running on 
> something like a cheap VPS. 
>
>
>
> 2016-09-26 15:59 GMT+02:00 Chris Foresman <fore...@gmail.com>: 
>
> > Robert, 
> > 
> > Thanks! This really does clear things up. The results were a little 
> > surprising at first blush since I believe part of the idea behind 
> > channels is to be able to serve more requests concurrently than a 
> > single-threaded approach typically allows. This is why I don't think 
> > this benchmark alone is very useful. We already knew it would be slower 
> > to serve requests with a single worker given the overhead as you 
> > described. So what does this benchmark get us? Is it merely to 
> > characterize the performance difference in the pathological case? I 
> > think ramping up the number of workers on a single machine would be an 
> > interesting next step, no? 
> > 
> > Anyway, thanks for taking the time to do this work and help us 
> > understand the results. 
> > 
> > 
> > 
> > On Sunday, September 25, 2016 at 8:23:45 PM UTC-5, Robert Roskam wrote: 
> >> 
> >> Hey Chris, 
> >> 
> >> Sure thing! I'm going to add a little color to this; probably a little 
> >> more than required. 
> >> 
> >> I have gunicorn for comparison on both graphs because channels supports 
> >> HTTP requests, so we wanted to see how it would do against a serious 
> >> production environment option. I could have equally done uwsgi; I chose 
> >> gunicorn out of convenience. It serves as a control for the redis 
> >> channels setup. 
> >> 
> >> The main point of comparison is to say: yeah, Daphne has an order of 
> >> magnitude higher latency than gunicorn, and as a consequence, its 
> >> throughput over the same period of time is lower than gunicorn's. This 
> >> really shouldn't be surprising. Channels is processing an HTTP request, 
> >> stuffing it in a redis queue, having a worker pull it out, process it, 
> >> and then sending a response back through the queue. This has some 
> >> innate overhead in it. 
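(Interjecting for context here: the request path Robert describes corresponds 
to the Channels 1.x channel-layer setup, where Daphne serializes each request 
onto an "http.request" channel and a separate runworker process pulls it off, 
runs the view, and sends the response back over a reply channel. A minimal 
sketch of the redis-backed configuration, assuming the asgi_redis backend and 
a project named "myproject", would be something like:

    # settings.py -- sketch only; project and routing names are assumptions
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
            },
            "ROUTING": "myproject.routing.channel_routing",
        },
    }

Every request therefore makes a round trip through redis, which is the innate 
overhead being measured against gunicorn's in-process handling.)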
> >> 
> >> You'll note I didn't include IPC in the latency comparison. That's 
> >> because it's so bad that it would make the graph unreadable. You can 
> >> get a sense of that when you see its throughput. So don't use it for 
> >> serious production machines. Use it for a dev environment when you 
> >> don't want a complex setup, or use it with nginx splitting traffic for 
> >> just websockets if you don't want to run redis for some reason. 
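(For the same kind of context: the IPC backend mentioned above is the 
asgi_ipc channel layer, which keeps the queue in shared memory on a single 
machine instead of in redis, so Daphne and the worker have to share a host. 
A rough sketch, again with assumed names:

    # settings.py -- sketch only; the prefix value is an arbitrary assumption
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_ipc.IPCChannelLayer",
            "CONFIG": {
                "prefix": "mysite",
            },
            "ROUTING": "myproject.routing.channel_routing",
        },
    }

The nginx arrangement Robert describes would route ordinary HTTP to a WSGI 
server and proxy only websocket traffic through to Daphne.)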
> >> 
> >> 
> >> 
> >> Robert Roskam 
> >> 
> >> On Wednesday, September 14, 2016 at 10:21:27 AM UTC-4, Chris Foresman 
> >> wrote: 
> >>> 
> >>> Yes. Honestly, just explain what these results mean in words, because 
> >>> I cannot turn these graphs into anything meaningful on my own. 
> >>> 
> >>> 
> >>> 
> >>> On Monday, September 12, 2016 at 8:41:05 PM UTC-5, Robert Roskam 
> >>> wrote: 
> >>>> 
> >>>> Hey Chris, 
> >>>> 
> >>>> The goal of these tests is to see how channels performs with normal 
> >>>> HTTP traffic under heavy load, with a control. In order to compare 
> >>>> accurately, I tried to eliminate variances as much as possible. 
> >>>> 
> >>>> So yes, there was one worker for both the Redis and IPC setups. I 
> >>>> provided the supervisor configs, as I figured those would be helpful 
> >>>> in describing exactly what commands were run on each system. 
> >>>> 
> >>>> Does that help bring some context? Or would you like for me to 
> >>>> elaborate further on some point? 
> >>>> 
> >>>> Thanks, 
> >>>> Robert 
> >>>> 
> >>>> 
> >>>> On Monday, September 12, 2016 at 2:38:59 PM UTC-4, Chris Foresman 
> >>>> wrote: 
> >>>>> 
> >>>>> Is this one worker each? I also don't really understand the 
> >>>>> implication of the results. There's no context to explain the 
> >>>>> numbers, nor whether one result is better than another. 
> >>>>> 
> >>>>> On Sunday, September 11, 2016 at 7:46:52 AM UTC-5, Robert Roskam 
> >>>>> wrote: 
> >>>>>> 
> >>>>>> Hello All, 
> >>>>>> 
> >>>>>> The following is an initial report of Django Channels performance. 
> >>>>>> While this is being shared in other media channels at this time, I 
> >>>>>> fully expect to get some questions or clarifications from this 
> >>>>>> group in particular, and I'll be happy to add to that README 
> >>>>>> anything to help describe the results. 
> >>>>>> 
> >>>>>> https://github.com/django/channels/blob/master/loadtesting/2016-09-06/README.rst
> >>>>>> 
> >>>>>> Robert Roskam 
> > 
> -- 
>
> Regards, Coues Ludovic 
> +336 148 743 42 
>
