Re: Django Channels Load Testing Results

2016-09-26 Thread ludovic coues
For example, a student trying to build an interactive browser game.
From what I understood, ASGI's main objective is to be the standard for
WebSockets with Django.

In my opinion, the tested case is not pathological. It is the default
one: Django configured just barely enough to have things working.

I agree that the benchmark only shows one face of the truth. Maybe ASGI
scales far better than WSGI. Maybe ASGI requires only a fraction of the
CPU or memory that WSGI needs. I don't know.

But the use case isn't pathological. If the defaults are the worst
possible configuration, something is wrong.

2016-09-26 22:30 GMT+02:00 Chris Foresman :
> Why would you be running a small website in ASGI mode with a single worker?
> My suspicion is that someone using Django in ASGI mode has a specific reason
> to do so. Otherwise, why not run it in WSGI mode?

Re: Django Channels Load Testing Results

2016-09-26 Thread Andrew Godwin
You might want to run a small site with WebSockets - there are a number of
reasons to use ASGI mode, and it's important we make it scale down as well
as up.

Andrew

On Mon, Sep 26, 2016 at 1:30 PM, Chris Foresman  wrote:

> Why would you be running a small website in ASGI mode with a single
> worker? My suspicion is that someone using Django in ASGI mode has a
> specific reason to do so. Otherwise, why not run it in WSGI mode?

Re: Django Channels Load Testing Results

2016-09-26 Thread Chris Foresman
Why would you be running a small website in ASGI mode with a single worker? 
My suspicion is that someone using Django in ASGI mode has a specific 
reason to do so. Otherwise, why not run it in WSGI mode?


On Monday, September 26, 2016 at 2:25:04 PM UTC-5, ludovic coues wrote:
>
> What you call a pathological case is a small website, running on 
> something like cheap VPS. 

Re: Django Channels Load Testing Results

2016-09-26 Thread ludovic coues
What you call a pathological case is a small website, running on
something like cheap VPS.



-- 

Regards, Coues Ludovic
+336 148 743 42

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop 

[ANNOUNCE] Django security releases issued: 1.9.10 and 1.8.15

2016-09-26 Thread Tim Graham
Today the Django team issued 1.9.10 and 1.8.15 as part of our security 
process. These releases address a security issue, and we encourage all 
users to upgrade as soon as possible.

Details are available on the Django project weblog:

https://www.djangoproject.com/weblog/2016/sep/26/security-releases/

As a reminder, we ask that potential security issues be reported via 
private email to secur...@djangoproject.com and not via Django's Trac 
instance or the django-developers list. Please see 
https://www.djangoproject.com/security for further information.



Re: Allow validators to short-circuit in form field validation

2016-09-26 Thread Aymeric Augustin
Note that I made this comment in reaction to Alexey’s email here :-)

-- 
Aymeric.

> On 26 Sep 2016, at 14:34, charettes  wrote:
> 
> Hi Alexey,
> 
> I'm not sure I understand why the approach Aymeric suggested is not viable for
> your use case.
> 
> It can be implemented in a few lines and doesn't require any modification to
> Django core.
> 
> 
> class ShortCircuitValidator(object):
>     def __init__(self, *validators):
>         self.validators = validators
> 
>     def __call__(self, value):
>         for validator in self.validators:
>             validator(value)
> 
> 
> class FileForm(forms.Form):
>     file = forms.FileField(
>         validators=[ShortCircuitValidator(
>             FileSizeValidator(max_size='500 kb'),
>             FileTypeValidator(extensions=['xlsx']),
>         )],
>     )
> 
> Simon



Re: Django Channels Load Testing Results

2016-09-26 Thread Chris Foresman
Robert,

Thanks! This really does clear things up. The results were a little 
surprising at first blush since I believe part of the idea behind channels 
is to be able to serve more requests concurrently than a single-threaded 
approach typically allows. This is why I don't think this benchmark alone 
is very useful. We already knew it would be slower to serve requests with a 
single worker given the overhead as you described. So what does this 
benchmark get us? Is it merely to characterize the performance difference 
in the pathological case? I think ramping up the number of workers on a 
single machine would be an interesting next step, no?

Anyway, thanks for taking the time to do this work and help us understand 
the results.



On Sunday, September 25, 2016 at 8:23:45 PM UTC-5, Robert Roskam wrote:
>
> Hey Chris,
>
> Sure thing! I'm going to add a little color to this; probably a little 
> more than required.
>
> I have gunicorn for comparison on both graphs because channels supports 
> HTTP requests, so we wanted to see how it would do against a serious 
> production environment option. I could have equally done uwsgi. I chose 
> gunicorn out of convenience. It serves as a control for the redis channels 
> setup.
>
> The main point of comparison is to say: yeah, Daphne has an order of 
> magnitude higher latency than gunicorn, and as a consequence, its 
> throughput over the same period of time is lower. This really 
> shouldn't be surprising. Channels is processing an HTTP request, stuffing 
> it in a redis queue, having a worker pull it out, process it, and then send 
> a response back through the queue. This has some innate overhead in it. 
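
(For concreteness, the round trip Robert describes maps onto consumer code
roughly like this -- a minimal sketch assuming the Channels 1.x-era consumer
and routing API; the module and function names are illustrative, not taken
from the benchmark setup:)

    # consumers.py -- a minimal HTTP consumer (Channels 1.x style)
    def http_request(message):
        # Daphne has already serialized the request onto the "http.request"
        # channel (backed by Redis or IPC); a worker process pops it off here.
        message.reply_channel.send({
            "status": 200,
            "headers": [(b"Content-Type", b"text/plain")],
            # The response travels back through the channel layer to Daphne,
            # which writes it out to the client -- hence the extra overhead.
            "content": b"OK",
        })

    # routing.py -- wire the channel name to the consumer
    from channels.routing import route

    channel_routing = [
        route("http.request", http_request),
    ]
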
>
> You'll note I didn't include IPC for latency comparison. It's because it's 
> so bad that it would make the graph unreadable. You can get a sense of 
> that when you see its throughput. So don't use it for serious production 
> machines. Use it for a dev environment when you don't want a complex setup, 
> or use it with nginx splitting traffic for just websockets if you don't 
> want to run redis for some reason.
>
>
>
> Robert Roskam
>
> On Wednesday, September 14, 2016 at 10:21:27 AM UTC-4, Chris Foresman 
> wrote:
>>
>> Yes. Honestly, just explain what these results mean in words, because I 
>> cannot turn these graphs into anything meaningful on my own.
>>
>>
>>
>> On Monday, September 12, 2016 at 8:41:05 PM UTC-5, Robert Roskam wrote:
>>>
>>> Hey Chris,
>>>
>>> The goal of these tests is to see how channels performs with normal HTTP 
>>> traffic under heavy load with a control. In order to compare accurately, I 
>>> tried to eliminate variances as much as possible. 
>>>
>>> So yes, there was one worker for both Redis and IPC setups. I provided 
>>> the supervisor configs, as I figured those would be helpful in describing 
>>> exactly what commands were run on each system.
>>>
>>> Does that help bring some context? Or would you like for me to elaborate 
>>> further on some point?
>>>
>>> Thanks,
>>> Robert
>>>
>>>
>>> On Monday, September 12, 2016 at 2:38:59 PM UTC-4, Chris Foresman wrote:

 Is this one worker each? I also don't really understand the implication 
 of the results. There's no context to explain the numbers nor if one 
 result 
 is better than another.

 On Sunday, September 11, 2016 at 7:46:52 AM UTC-5, Robert Roskam wrote:
>
> Hello All,
>
> The following is an initial report of Django Channels performance. 
> While this is being shared in other media channels at this time, I fully 
> expect to get some questions or clarifications from this group in 
> particular, and I'll be happy to add to that README anything to help 
> describe the results.
>
>
> https://github.com/django/channels/blob/master/loadtesting/2016-09-06/README.rst
>
>
> Robert Roskam
>




Re: Allow validators to short-circuit in form field validation

2016-09-26 Thread charettes
Hi Alexey,

I'm not sure I understand why the approach Aymeric suggested is not viable
for your use case.

It can be implemented in a few lines and doesn't require any modification to
Django core.


class ShortCircuitValidator(object):
    def __init__(self, *validators):
        self.validators = validators

    def __call__(self, value):
        for validator in self.validators:
            validator(value)


class FileForm(forms.Form):
    file = forms.FileField(
        validators=[ShortCircuitValidator(
            FileSizeValidator(max_size='500 kb'),
            FileTypeValidator(extensions=['xlsx']),
        )],
    )
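
The short-circuit itself comes from exception propagation: the first
ValidationError raised inside __call__ aborts the loop, so the remaining
validators never run, whereas Django's normal field validation runs every
validator in the list and collects all of the errors. A quick illustration
with throwaway validators (hypothetical names, purely to show the behavior):

    from django.core.exceptions import ValidationError

    def too_big(value):
        # Raised first; propagates out of ShortCircuitValidator immediately.
        raise ValidationError('file too large')

    def never_reached(value):
        raise AssertionError('short-circuited, so this never runs')

    check = ShortCircuitValidator(too_big, never_reached)
    check('anything')  # raises ValidationError('file too large') only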

Simon

On Monday, September 26, 2016 at 07:05:36 UTC-4, Alexey Rogachev wrote:
>
> I opened ticket https://code.djangoproject.com/ticket/27263.
>
> Do you think the solution I suggested in the comment is OK?
>
>



Allow validators to short-circuit in form field validation

2016-09-26 Thread Alexey Rogachev
I opened ticket https://code.djangoproject.com/ticket/27263.

Do you think the solution I suggested in the comment is OK?
