Re: re-thinking middleware

2016-05-06 Thread Carl Meyer
I agree with Simon on both counts. We do usually continue to test
deprecated code paths until they are removed, but I also think the
duplication in cases of tests overriding MIDDLEWARE_CLASSES might not be
necessary in _all_ cases; I think some discretion could be used
depending on to what extent the middleware is incidental to the tests vs
the direct subject of the test. But it might be simpler to just do them
all than to make that determination.
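
Simon's suggested test layout (the legacy class extending the MIDDLEWARE-based class) could look roughly like this. It is a sketch only: the class and middleware names are invented, and a toy stand-in replaces Django's @override_settings so the snippet runs on its own.

```python
# Toy stand-in for django.test.override_settings so this sketch is
# self-contained; the real decorator patches Django settings instead.
def override_settings(**overrides):
    def decorator(cls):
        cls.settings_overrides = {**getattr(cls, "settings_overrides", {}),
                                  **overrides}
        return cls
    return decorator


@override_settings(MIDDLEWARE=["myapp.middleware.ExampleMiddleware"])
class MiddlewareTests:
    """Tests written against the new-style MIDDLEWARE setting."""

    def active_setting(self):
        # Which setting variant this test class exercises.
        if self.settings_overrides.get("MIDDLEWARE") is not None:
            return "MIDDLEWARE"
        return "MIDDLEWARE_CLASSES"


@override_settings(MIDDLEWARE=None,
                   MIDDLEWARE_CLASSES=["myapp.middleware.ExampleMiddleware"])
class LegacyMiddlewareTests(MiddlewareTests):
    """Re-runs every inherited test under MIDDLEWARE_CLASSES; removing
    this subclass is the whole cleanup once the deprecation ends."""
```

Because the legacy class only swaps the settings override, deleting it at the end of the deprecation period leaves the MIDDLEWARE tests untouched.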

Carl

On 05/04/2016 08:57 PM, charettes wrote:
> Hi Tim,
> 
> I think we should favor displaying a message that matches the
> setting the user is using, as it will make the transition less confusing.
> In the case of the documented check message, I think using the form
> "MIDDLEWARE/MIDDLEWARE_CLASSES" would make it easier to read than
> mentioning the two possible variants. We already alter the documented
> messages anyway to account for their dynamic nature.
> 
> In the case of the tests, I believe both code paths should continue to be
> tested. Off the top of my head I can't think of an alternative to
> subclasses using @override_settings. I suggest we make the *legacy*
> test class extend the MIDDLEWARE-using test class, and not the other way
> around, as it will make the eventual MIDDLEWARE_CLASSES code removal clearer.
> 
> Simon
> 
> Le mercredi 4 mai 2016 19:59:05 UTC-4, Tim Graham a écrit :
> 
> I've been working on this and wanted to raise a couple points for
> discussion.
> 
How should we treat error messages in places like system checks where
we have phrases like "Edit your MIDDLEWARE_CLASSES"? Of course the
check can easily inspect both MIDDLEWARE and MIDDLEWARE_CLASSES, but
should we make the error message "smart" and display the version of
the setting that the user is using?
Alternatively, we could always reference MIDDLEWARE (the
non-deprecated version) or use some variation like
"MIDDLEWARE(_CLASSES)" or "MIDDLEWARE/MIDDLEWARE_CLASSES" until the
deprecation period ends.
> 
Another point for discussion is whether we need to duplicate a lot
of tests so that we test that the middleware continues to work with
both the old-style MIDDLEWARE_CLASSES and the new-style MIDDLEWARE
response handling. I guess subclassing anything that uses
@override_settings(MIDDLEWARE=...) and re-decorating the subclass
with @override_settings(MIDDLEWARE_CLASSES=...) might work. Just
putting it out there in case anyone has a better idea.
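
The "smart" message Tim describes could be sketched like this (a hypothetical helper, not Django's actual check code; the message text and dict-based settings are illustrative):

```python
def middleware_check_hint(settings):
    """Build a check message naming whichever setting the project uses.

    `settings` is a plain dict standing in for the Django settings module.
    """
    if settings.get("MIDDLEWARE") is not None:
        name = "MIDDLEWARE"
    else:
        name = "MIDDLEWARE_CLASSES"
    return f"Edit your {name} setting to fix this."
```

During the deprecation period the helper picks the name the user actually configured; once MIDDLEWARE_CLASSES is removed, the branch disappears.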
> 
> On Monday, January 18, 2016 at 9:20:03 PM UTC-5, Carl Meyer wrote:
> 
> I've updated DEP 5 with a new round of clarifications and tweaks
> based on the most recent feedback:
> https://github.com/django/deps/compare/62b0...master
> 
> 
> Carl
> 
> -- 
> You received this message because you are subscribed to the Google
> Groups "Django developers (Contributions to Django itself)" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to django-developers+unsubscr...@googlegroups.com
> .
> To post to this group, send email to django-developers@googlegroups.com
> .
> Visit this group at https://groups.google.com/group/django-developers.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-developers/eb1cf3f4-c021-40f6-be65-35427b2bf5c5%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Andrew Godwin
On Fri, May 6, 2016 at 2:11 PM, Carl Meyer  wrote:

>
> On 05/06/2016 02:31 PM, Andrew Godwin wrote:
> >
> > On Fri, May 6, 2016 at 1:19 PM, Carl Meyer  > > wrote:
> >
> > On 05/06/2016 01:56 PM, Donald Stufft wrote:
> > > User level code would not be handling WebSockets asynchronously,
> that
> > > would be left up to the web server (which would call the user
> level code
> > > using deferToThread each time a websocket frame comes in).
> Basically
> > > similar to what’s happening now, except instead of using the
> network and
> > > a queue to allow calling sync user code from an async process, you
> just
> > > use the primitives provided by the async framework.
> >
> > I think (although I haven't looked at it carefully yet) you're
> basically
> > describing the approach taken by hendrix [1]. I'd be curious,
> Andrew, if
> > you considered a thread-based approach as an option and rejected it?
> It
> > does seem like, purely on the accessibility front, it is perhaps even
> > simpler than Channels (just in terms of how many services you need to
> > deploy).
> >
> > Well, the thread-based approach is in channels; it's exactly how
> > manage.py runserver works (it starts daphne and 4 workers in their own
> > threads, and ties them together with the in-memory backend).
> >
> > So, yes, I considered it, and implemented it! I just didn't think it was
> > enough to have just that solution, which means some of the things a
> > local-memory-only backend could have done (like more detailed operations
> > on channels) didn't go in the API.
>
> Ha! Clearly I need to go have a play with channels. It does seem to me
> that this is a strong mark in favor of channels on the accessibility
> front that deserves more attention than it's gotten here: that the
> in-memory backend with threads could be a reasonable way to set up even
> a production deployment of many small sites that want websockets and
> delayed tasks without requiring separate management of interface
> servers, Redis, and workers (or separate WSGI and async servers). Of
> course it has the downside that thread-safety becomes an issue, but
> people have been deploying Django under mod_wsgi with threaded workers
> for years, so that's not exactly new.
>
> Of course, there's still internally a message bus between the server and
> the workers, so this isn't exactly the approach Donald was preferring;
> it still comes with some of the tradeoffs of using a message queue at
> all, rather than having the async server just making its own decisions
> about allocating requests to threads.
>

Yup, that's definitely the tradeoff of this approach; it's not quite as
intelligent as a more direct solution could be. With an in-memory backend,
however, you can take the channel capacity down pretty low to provide
quicker backpressure to at least get _some_ of that back.
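
The effect Andrew describes can be illustrated with a bounded queue: a low capacity makes the layer refuse new messages sooner, so the interface server can push back on clients. This is a sketch, not the actual channel-layer API.

```python
import queue

channel = queue.Queue(maxsize=2)     # low capacity = quicker backpressure

accepted, rejected = 0, 0
for message in range(5):             # interface server receiving 5 messages
    try:
        channel.put_nowait(message)  # raises queue.Full once at capacity
        accepted += 1
    except queue.Full:
        rejected += 1                # signal backpressure to the client here
```

With workers draining the queue concurrently, `Full` would only be hit when they genuinely fall behind.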

(Another thing I should mention - with the IPC backend, you could run an
asyncio interface server on Python 3 and keep running your legacy business
logic on a Python 2 worker, all on the same machine using speedy shared
memory to communicate)

Andrew



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Andrew Godwin
On Fri, May 6, 2016 at 1:19 PM, Carl Meyer  wrote:

> On 05/06/2016 01:56 PM, Donald Stufft wrote:
> > User level code would not be handling WebSockets asynchronously, that
> > would be left up to the web server (which would call the user level code
> > using deferToThread each time a websocket frame comes in). Basically
> > similar to what’s happening now, except instead of using the network and
> > a queue to allow calling sync user code from an async process, you just
> > use the primitives provided by the async framework.
>
> I think (although I haven't looked at it carefully yet) you're basically
> describing the approach taken by hendrix [1]. I'd be curious, Andrew, if
> you considered a thread-based approach as an option and rejected it? It
> does seem like, purely on the accessibility front, it is perhaps even
> simpler than Channels (just in terms of how many services you need to
> deploy).


Well, the thread-based approach is in channels; it's exactly how manage.py
runserver works (it starts daphne and 4 workers in their own threads, and
ties them together with the in-memory backend).

So, yes, I considered it, and implemented it! I just didn't think it was
enough to have just that solution, which means some of the things a
local-memory-only backend could have done (like more detailed operations on
channels) didn't go in the API.
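
A rough sketch of that wiring (illustrative only, not channels' actual implementation): one in-memory queue standing in for the channel layer, with four worker threads consuming from it, as runserver does.

```python
import queue
import threading

channel_layer = queue.Queue()   # stand-in for the in-memory backend
handled = []

def worker():
    while True:
        message = channel_layer.get()
        if message is None:     # shutdown sentinel
            return
        handled.append(f"handled {message}")  # run the view code here

workers = [threading.Thread(target=worker) for _ in range(4)]
for thread in workers:
    thread.start()

for request in ("req-1", "req-2"):  # the interface server enqueues work
    channel_layer.put(request)
for _ in workers:                   # one sentinel per worker
    channel_layer.put(None)
for thread in workers:
    thread.join()
```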

Andrew



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Carl Meyer
On 05/06/2016 01:56 PM, Donald Stufft wrote:
> User level code would not be handling WebSockets asynchronously, that
> would be left up to the web server (which would call the user level code
> using deferToThread each time a websocket frame comes in). Basically
> similar to what’s happening now, except instead of using the network and
> a queue to allow calling sync user code from an async process, you just
> use the primitives provided by the async framework.

I think (although I haven't looked at it carefully yet) you're basically
describing the approach taken by hendrix [1]. I'd be curious, Andrew, if
you considered a thread-based approach as an option and rejected it? It
does seem like, purely on the accessibility front, it is perhaps even
simpler than Channels (just in terms of how many services you need to
deploy).

Carl

  [1] http://hendrix.readthedocs.io/en/latest



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Aymeric Augustin
> On 06 May 2016, at 21:56, Donald Stufft  wrote:
> 
>> On May 6, 2016, at 3:49 PM, Aymeric Augustin 
>> > > wrote:
>> 
>> Sure, this works for WSGI, but barring significant changes to Django, it 
>> doesn’t make it convenient to handle WSGI synchronously and WebSockets 
>> asynchronously with the same code base, let alone in the same process.
> 
> User level code would not be handling WebSockets asynchronously, that would 
> be left up to the web server (which would call the user level code using 
> deferToThread each time a websocket frame comes in). Basically similar to 
> what’s happening now, except instead of using the network and a queue to 
> allow calling sync user code from an async process, you just use the 
> primitives provided by the async framework.

Ah, right! I think this would be quite similar to a synchronous, in-memory 
channels backend.

-- 
Aymeric.



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Donald Stufft

> On May 6, 2016, at 3:49 PM, Aymeric Augustin 
>  wrote:
> 
> Sure, this works for WSGI, but barring significant changes to Django, it 
> doesn’t make it convenient to handle WSGI synchronously and WebSockets 
> asynchronously with the same code base, let alone in the same process.

User level code would not be handling WebSockets asynchronously, that would be 
left up to the web server (which would call the user level code using 
deferToThread each time a websocket frame comes in). Basically similar to 
what’s happening now, except instead of using the network and a queue to allow 
calling sync user code from an async process, you just use the primitives 
provided by the async framework.
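
A minimal sketch of that pattern, using the stdlib thread pool in place of Twisted's deferToThread (the handler and frame are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)  # the server's worker threads

def handle_frame(frame):
    # Ordinary synchronous user-level code; never touches the event loop.
    return frame.upper()

def on_websocket_frame(frame):
    # The async server hands each frame to a thread instead of blocking.
    return pool.submit(handle_frame, frame)

future = on_websocket_frame("ping")
result = future.result()
pool.shutdown()
```

Twisted's deferToThread returns a Deferred rather than a Future, but the shape is the same: the async server stays responsive while sync user code runs in a thread.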

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Aymeric Augustin

> On 06 May 2016, at 19:59, Donald Stufft  wrote:
> 
>> On May 6, 2016, at 1:45 PM, Andrew Godwin > > wrote:
>> 
>> On Fri, May 6, 2016 at 9:11 AM, Donald Stufft > > wrote:
>> 
>> So what sort of solution would I personally advocate had I the time or energy
>> to do so? I would look towards what sort of pure Python API (like WSGI 
>> itself)
>> could be added to allow a web server to pass websockets down into Django.
>> 
>> I agree with the want to use things like HAProxy in the stack, but I think 
>> your idea of handling WebSockets natively in Django is far more difficult 
>> and fragile than Channels is, mostly due to our ten-year history of 
>> synchronous code. We would have to audit a large amount of the codebase to 
>> ensure it was all async compatible, not to mention drop Python 2 support, 
>> before we'd even get close.
> 
> You don’t need to write it asynchronously. You need an async server but that 
> async server can execute synchronous code just fine using something like 
> deferToThread. That’s how twistd -n web --wsgi works today. It gets a request 
> and it deferToThread’s it to synchronous WSGI code.

Sure, this works for WSGI, but barring significant changes to Django, it 
doesn’t make it convenient to handle WSGI synchronously and WebSockets 
asynchronously with the same code base, let alone in the same process.

Problems begin when you want a synchronous function and an asynchronous one to 
call the same function that does I/O, for example `get_session(session_id)` or 
`get_current_user(user_id)`. Every useful service serving authenticated users 
starts with these.

If you’re very careful to never mix sync and async code, sure, it will work. It 
will be unforgiving, in the sense that it will be too easy to accidentally 
block the event loop handling the async bits. In the end, essentially, you end 
up writing two separate apps… and it's harder than actually writing them 
separately.
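
The hazard can be sketched with asyncio (get_session is the hypothetical helper named above; the sleep simulates database I/O):

```python
import asyncio
import time

def get_session(session_id):
    # Synchronous helper shared by sync views and async code.
    time.sleep(0.01)            # simulated blocking database hit
    return {"id": session_id}

async def async_view(session_id):
    loop = asyncio.get_running_loop()
    # Calling get_session(session_id) directly here would stall the
    # whole event loop; every shared helper needs this wrapping, and
    # forgetting it once is enough to block all connections.
    return await loop.run_in_executor(None, get_session, session_id)

session = asyncio.run(async_view("abc123"))
```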

That’s why I’m pessimistic about running everything on an event loop as long as 
we don’t have a way to guarantee that Django never blocks.

-- 
Aymeric.



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Donald Stufft

> On May 6, 2016, at 1:45 PM, Andrew Godwin  wrote:
> 
> Want to just cover a few more things I didn't in my reply to Aymeric.
> 
> On Fri, May 6, 2016 at 9:11 AM, Donald Stufft  > wrote:
> 
> In short, I think that the message bus adds an additional layer of complexity
> that makes everything a bit more complex and complicated for very little 
> actual
> gain over other possible, but less complex solutions. This message bus also
> removes a key part of the amount of control that the server which is 
> *actually*
> receiving the connection has over the lifetime and process of the eventual
> request.
> 
> True; however, having a message bus/channel abstraction also removes a layer 
> of complexity that is caring about socket handling and sinking your 
> performance by even doing a slightly blocking operation.
> 
> In an ideal world we'd have some magical language that let us all write 
> amazing async code and that detected all possible deadlocks or livelocks 
> before they happened, but that's not yet the case, and I think the worker 
> model has been a good substitute for it in software design generally.
> 
> 
> For an example, in traditional HTTP servers where you have an open connection
> associated with whatever view code you're running whenever the client
> disconnects you're given a few options of what you can do, but the most common
> option in my experience is that once the connection has been lost the HTTP
> server cancels the execution of whatever view code it had been running [1].
> This allows a single process to serve more by shedding the load of connections
> that have since been disconnected for some reason, however in ASGI since
> there's no way to remove an item from the queue or cancel it once it has begun
> to be processed by a worker process, you lose out on this ability to shed the
> load of processing a request once it has already been scheduled.
> 
> But as soon as you introduce a layer like Varnish into the equation, you've 
> lost this anyway, as you're no longer seeing the true client socket. 
> Abandoned requests are an existent problem with HTTP and WSGI; I see them in 
> our logs all the time.


I don’t believe that to be true. For example: the client connects to Varnish, 
Varnish connects to h2o, and h2o connects to gunicorn, which is running WSGI. The 
client closes the connection to Varnish, so Varnish closes the connection to 
h2o, so h2o closes the connection to gunicorn, which can then throw a SystemExit 
exception and halt execution of the code.

> 
> 
> This additional complexity incurred by the message bus also ends up requiring
> additional complexity layered onto ASGI to try and re-invent some of the
> "natural" features of TCP and/or HTTP (or whatever the underlying protocol 
> is).
> An example of this would be the ``order`` keyword in the WebSocket spec,
> something that isn't required and just naturally happens whenever you're
> directly connected to a websocket because the ``order`` is just whatever bytes
> come in off the wire. This also gets exposed in other features, like
> backpressure where ASGI didn't originally have a concept of allowing the queue
> to apply back pressure to the web connection but now Andrew has started to 
> come
> around to the idea of adding a bounding to the queue (which is good!) but if
> the indirection of the message bus hadn't been added, then backpressure would
> have naturally occurred whenever you ended up getting enough things processing
> that it blocked new connections from being ``accept``d which would eventually
> end up filling up the backlog and then making new connections block
> waiting to connect. Now it's good that Andrew is adding the ability to bound
> the queue, but that is something that is going to require care to tune in each
> individual deployment (and will need to be regularly re-evaluated) rather than
> something that just occurs naturally as a consequence of the design of the
> system.
> 
> Client buffers in OSs were also manually tuned to begin with; I suspect we 
> can hone in on how to make this work best over time once we have more 
> experience with how it runs in the wild. I don't disagree that I'm 
> reinventing existing features of TCP sockets, but it's also a mix of UDP 
> features too; there's a reason a lot of modern protocols back onto UDP 
> instead of TCP, and I'm trying to strike the balance.
> 
> 
> Anytime you add a message bus you need to make a few trade offs, the 
> particular
> trade off that ASGI made is that it should prefer "at most once" delivery of
> messages and low latency to guaranteed delivery. This choice is likely one of
> the sanest ones you can make in regards to which trade offs you make for the
> design of ASGI, but in that trade off you end up with new problems that don't
> exist otherwise. For example, HTTP/1 has the concept of pipelining which 
> allows
> you to make several HTTP requests on a single HTTP 

Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Andrew Godwin
Want to just cover a few more things I didn't in my reply to Aymeric.

On Fri, May 6, 2016 at 9:11 AM, Donald Stufft  wrote:
>
>
> In short, I think that the message bus adds an additional layer of
> complexity
> that makes everything a bit more complex and complicated for very little
> actual
> gain over other possible, but less complex solutions. This message bus also
> removes a key part of the amount of control that the server which is
> *actually*
> receiving the connection has over the lifetime and process of the eventual
> request.
>

True; however, having a message bus/channel abstraction also removes a
layer of complexity that is caring about socket handling and sinking your
performance by even doing a slightly blocking operation.

In an ideal world we'd have some magical language that let us all write
amazing async code and that detected all possible deadlocks or livelocks
before they happened, but that's not yet the case, and I think the worker
model has been a good substitute for it in software design generally.


>
> For an example, in traditional HTTP servers where you have an open
> connection
> associated with whatever view code you're running whenever the client
> disconnects you're given a few options of what you can do, but the most
> common
> option in my experience is that once the connection has been lost the HTTP
> server cancels the execution of whatever view code it had been running [1].
> This allows a single process to serve more by shedding the load of
> connections
> that have since been disconnected for some reason, however in ASGI since
> there's no way to remove an item from the queue or cancel it once it has
> begun
> to be processed by a worker process, you lose out on this ability to shed
> the
> load of processing a request once it has already been scheduled.
>

But as soon as you introduce a layer like Varnish into the equation, you've
lost this anyway, as you're no longer seeing the true client socket.
Abandoned requests are an existent problem with HTTP and WSGI; I see them
in our logs all the time.


>
> This additional complexity incurred by the message bus also ends up
> requiring
> additional complexity layered onto ASGI to try and re-invent some of the
> "natural" features of TCP and/or HTTP (or whatever the underlying protocol
> is).
> An example of this would be the ``order`` keyword in the WebSocket spec,
> something that isn't required and just naturally happens whenever you're
> directly connected to a websocket because the ``order`` is just whatever
> bytes
> come in off the wire. This also gets exposed in other features, like
> backpressure where ASGI didn't originally have a concept of allowing the
> queue
> to apply back pressure to the web connection but now Andrew has started to
> come
> around to the idea of adding a bounding to the queue (which is good!) but
> if
> the indirection of the message bus hadn't been added, then backpressure
> would
> have naturally occurred whenever you ended up getting enough things
> processing
> that it blocked new connections from being ``accept``d which would
> eventually
> end up filling up the backlog and then making new connections block
> waiting to connect. Now it's good that Andrew is adding the ability to
> bound
> the queue, but that is something that is going to require care to tune in
> each
> individual deployment (and will need to be regularly re-evaluated) rather than
> something that just occurs naturally as a consequence of the design of the
> system.
>

Client buffers in OSs were also manually tuned to begin with; I suspect we
can hone in on how to make this work best over time once we have more
experience with how it runs in the wild. I don't disagree that I'm
reinventing existing features of TCP sockets, but it's also a mix of UDP
features too; there's a reason a lot of modern protocols back onto UDP
instead of TCP, and I'm trying to strike the balance.


>
> Anytime you add a message bus you need to make a few trade offs, the
> particular
> trade off that ASGI made is that it should prefer "at most once" delivery
> of
> messages and low latency to guaranteed delivery. This choice is likely one
> of
> the sanest ones you can make in regards to which trade offs you make for
> the
> design of ASGI, but in that trade off you end up with new problems that
> don't
> exist otherwise. For example, HTTP/1 has the concept of pipelining which
> allows
> you to make several HTTP requests on a single HTTP connection without
> waiting
> for the responses before sending each one. Given the nature of ASGI it
> would be
> very difficult to actually support this feature without either violating
> the
> RFC or forcing either Daphne or the queue to buffer potentially huge
> responses
> while it waits for another request that came before it to be finished
> whereas
> again you get this for free using either async IO (you just don't await the
> result of that second request until the first request has 

Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Carl Meyer
On 05/06/2016 11:09 AM, Aymeric Augustin wrote:
> I think it's important to keep a straightforward WSGI backend in case we crack
> this problem and build an async story that depends on asyncio after dropping
> support for Python 2.
> 
> I don't think merging channels as it currently stands hinders this possibility
> in any way, on the contrary. The more Django is used for serving HTTP/2 and
> websockets, the more we can learn.

This summarizes my feelings about merging channels. It feels a bit
experimental to me, and I'm not yet convinced that I'd choose to use it
myself (but I'd be willing to try it out). As long as it's marked as
provisional for now and we maintain straight WSGI as an option, so
nobody's forced into it, we can maybe afford to experiment and learn
from it.

ISTM that the strongest argument in favor is that I think it _is_
significantly easier for a casual user to build and deploy their first
websockets app using Channels than using any other currently-available
approach with Django. Both channels and Django+whatever-async-server
require managing multiple servers, but channels makes a lot of decisions
for you and makes it really easy to keep all your code together. And (as
long as we still support plain WSGI) it doesn't remove the flexibility
for more advanced users who prefer different tradeoffs to still choose
other approaches. There's a lot to be said for that combination of
"accessible for the new user, still flexible for the advanced user", IMO.

Carl



Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Andrew Godwin
On Fri, May 6, 2016 at 10:09 AM, Aymeric Augustin <
aymeric.augus...@polytechnique.org> wrote:

> Hello Donald, all,
>
> Some thoughts inline below.
>
> > On 06 May 2016, at 18:11, Donald Stufft  wrote:
> >
> > For an example, in traditional HTTP servers where you have an open
> connection
> > associated with whatever view code you're running whenever the client
> > disconnects you're given a few options of what you can do, but the most
> common
> > option in my experience is that once the connection has been lost the
> HTTP
> > server cancels the execution of whatever view code it had been running
> [1].
> > This allows a single process to serve more by shedding the load of
> connections
> > that have since been disconnected for some reason, however in ASGI since
> > there's no way to remove an item from the queue or cancel it once it has
> begun
> > to be processed by a worker process, you lose out on this ability to
> shed the
> > load of processing a request once it has already been scheduled.
>
> In theory this effect is possible. However I don't think it will make a
> measurable difference in practice. A Python server will usually process
> requests quickly and push the response to a reverse-proxy. It should have
> finished processing the request by the time it's reasonable to assume the
> client has timed out.
>
> This would only be a problem when serving extremely large responses in
> Python,
> which is widely documented as a performance anti-pattern that must be
> avoided
> at all costs. So if this effect happens, you have far worse problems :-)
>

I will also point out that I've introduced channel capacity and
backpressure into the ASGI spec now (it's in three of the four backends,
soon to be in the fourth) to help combat some of this problem, specifically
relating to an overload of requests or very slow response readers.


>
>
> > This additional complexity incurred by the message bus also ends up
> requiring
> > additional complexity layered onto ASGI to try and re-invent some of the
> > "natural" features of TCP and/or HTTP (or whatever the underlying
> protocol is).
> > An example of this would be the ``order`` keyword in the WebSocket spec,
> > something that isn't required and just naturally happens whenever you're
> > directly connected to a websocket because the ``order`` is just whatever
> bytes
> > come in off the wire.
>
> I'm somewhat concerned by this risk. Out-of-order processing of messages
> coming from a single connection could cause surprising bugs. This is
> likely one
> of the big tradeoffs of the async-to-sync conversion that channels performs. I
> assume it will have to be documented.
>
> Could someone confirm that this doesn't happen for regular HTTP/1.1
> requests?
> I suppose channels encodes each HTTP/1.1 request as a single message.
>

Yes, it encodes each request as a single main message, and the request body
(if large enough) is chunked onto a separate "body" channel for that
specific request; since only one reader touches that channel, it will get
the messages in-order.
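A rough sketch of that chunking scheme, with one main message plus ordered body chunks on a dedicated channel. The field names and the body-channel name are illustrative, not the exact ASGI message format:

```python
def chunk_request(path, body, chunk_size=3):
    # One main message carries the metadata and points at a per-request
    # body channel; the body follows as ordered chunks. Names here are
    # made up to mirror the idea, not copied from the spec.
    body_channel = "http.request.body!abc123"  # hypothetical unique name
    main = {"path": path, "body_channel": body_channel}
    chunks = [
        {"content": body[i:i + chunk_size],
         "more_content": i + chunk_size < len(body)}
        for i in range(0, len(body), chunk_size)
    ]
    return main, chunks

main, chunks = chunk_request("/upload", b"0123456789")
# The single reader draining the body channel sees the chunks in order
# and can reassemble the body:
reassembled = b"".join(c["content"] for c in chunks)
```

Because exactly one consumer reads that per-request channel, ordering within the body needs no extra machinery.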

It is unfortunate that in-order processing requires a bit more work, but
the alternative is having to pin WebSocket connections to a single worker
server (sticky sessions), which is not great and kind of defeats the point of having a
system like this.

I'd also like to point out that if a site has a very complex WebSocket
protocol I would likely encourage them to write their own interface server
to move some of the more order-sensitive logic closer to the client, and
then just have that code generate higher-level events back into Django;
Channels is very much a multi-protocol system, not just for WebSockets and
HTTP.


>
> Note that out of order processing is already possible without channels e.g.
> due to network latency or high load on a worker.
>
> The design of channels seems similar to HTTP/2 — a bunch of messages sent
> in
> either direction with no pretense to synchronize communications. This is a
> scary model but I guess we'll have to live with it anyway...
>

Yes, it's pretty similar to HTTP/2, which is not entirely a mistake. If
you're going to take the step and separate the processes out, I think this
model is the most reasonable one to take.


>
> Does anyone know if HTTP/2 allows sending responses out of order? This
> would
> make sub-optimal handling of HTTP/1.1 pipelining less of a concern going
> forwards. We could live with a less efficient implementation.
>

It does; you can send responses in any order you like as long as you
already got the request matching it. You can also push other requests _to
the client_ with their own premade responses before you send a main
response (Server Push).


>
>
> > I believe the introduction of a message bus here makes things inherently
> more
> > fragile. In order to reasonable serve web sockets you're now talking
> about a
> > total of 3 different processes that need to be run (Daphne, Redis, and
> Django)
> each that will exhibit its own 

Re: My Take on Django Channels

2016-05-06 Thread Carl Meyer
Hi Andrew,

Replying off-list just to say that I totally understand your frustration
here, and I wish I weren't contributing to it :( I hope I'm managing to
speak my mind without being an asshole about it, and I hope you'd tell
me if I failed.

Really glad Jacob stepped up on the DEP; I was thinking when I wrote my
last email that that'd be the ideal solution, but I didn't want to put
anyone else on the spot (I almost volunteered myself, but I don't think
I'm quite sold enough yet to be a good advocate). I hope the
conversation on the technical topics goes well (I think you've been
doing a great job with it so far on these latest threads) and a
satisfactory resolution is reached in time for a merge into 1.10.

Carl

> I think you're entirely right, Carl - I'm just getting frustrated with
> myself at this point for not realising sooner and trying to find ways to
> not do it - people only pay real attention to a patch as you're close to
> merging and emotionally invested in it, and it's a little exasperating. 
> 
> Jacob has graciously stepped in to help write one, and I am going to
> have a much-needed evening off from doing Channels stuff, I haven't had
> a break in a while.


-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/572CD1C3.2090305%40oddbird.net.
For more options, visit https://groups.google.com/d/optout.




Re: Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Aymeric Augustin
Hello Donald, all,

Some thoughts inline below.

> On 06 May 2016, at 18:11, Donald Stufft  wrote:
> 
> For an example, in traditional HTTP servers where you have an open connection
> associated with whatever view code you're running whenever the client
> disconnects you're given a few options of what you can do, but the most common
> option in my experience is that once the connection has been lost the HTTP
> server cancels the execution of whatever view code it had been running [1].
> This allows a single process to serve more by shedding the load of connections
> that have since been disconnected for some reason, however in ASGI since
> there's no way to remove an item from the queue or cancel it once it has begun
> to be processed by a worker process you lose out on this ability to shed the
> load of processing a request once it has already been scheduled.

In theory this effect is possible. However I don't think it will make a
measurable difference in practice. A Python server will usually process
requests quickly and push the response to a reverse-proxy. It should have
finished processing the request by the time it's reasonable to assume the
client has timed out.

This would only be a problem when serving extremely large responses in Python,
which is widely documented as a performance anti-pattern that must be avoided
at all costs. So if this effect happens, you have far worse problems :-)


> This additional complexity incurred by the message bus also ends up requiring
> additional complexity layered onto ASGI to try and re-invent some of the
> "natural" features of TCP and/or HTTP (or whatever the underlying protocol 
> is).
> An example of this would be the ``order`` keyword in the WebSocket spec,
> something that isn't required and just naturally happens whenever you're
> directly connected to a websocket because the ``order`` is just whatever bytes
> come in off the wire.

I'm somewhat concerned by this risk. Out-of-order processing of messages
coming from a single connection could cause surprising bugs. This is likely one
of the big tradeoffs of the async-to-sync conversion that channels performs. I
assume it will have to be documented.

Could someone confirm that this doesn't happen for regular HTTP/1.1 requests?
I suppose channels encodes each HTTP/1.1 request as a single message.

Note that out of order processing is already possible without channels e.g.
due to network latency or high load on a worker.

The design of channels seems similar to HTTP/2 — a bunch of messages sent in
either direction with no pretense to synchronize communications. This is a
scary model but I guess we'll have to live with it anyway...


> Anytime you add a message bus you need to make a few trade-offs; the 
> particular
> trade-off that ASGI made is that it should prefer "at most once" delivery of
> messages and low latency to guaranteed delivery.

That’s already what happens today, especially on mobile connections. Many
requests or responses don’t get delivered. And it isn’t even a trade-off
against speed.


> This choice is likely one of
> the sanest ones you can make in regards to which trade offs you make for the
> design of ASGI, but in that trade off you end up with new problems that don't
> exist otherwise. For example, HTTP/1 has the concept of pipelining which 
> allows
> you to make several HTTP requests on a single HTTP connection without waiting
> for the responses before sending each one. Given the nature of ASGI it would 
> be
> very difficult to actually support this feature without either violating the
> RFC or forcing either Daphne or the queue to buffer potentially huge responses
> while it waits for another request that came before it to be finished whereas
> again you get this for free using either async IO (you just don't await the
> result of that second request until the first request has been processed) or
> with WSGI if you're using generators (you just don't iterate over the result
> until you're ready for it).

In this case, daphne forwarding to channels seems to be in exactly the same
position as, say, nginx forwarding to gunicorn. At worst, daphne can just
wait until a response is sent before passing the next request in the pipeline
to channels. At best, it can be smarter.
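The "at worst, wait; at best, be smarter" option can be sketched as a reorder buffer that flushes responses strictly in request order, as HTTP/1.1 pipelining requires. This is a hypothetical illustration, not daphne's actual code:

```python
class PipelineBuffer:
    # Responses that finish out of order are buffered and flushed
    # strictly in request order.
    def __init__(self):
        self.next_to_send = 0  # request id whose response must go out next
        self.buffered = {}     # finished responses waiting for their turn
        self.sent = []         # what has actually gone out on the wire

    def response_ready(self, request_id, response):
        self.buffered[request_id] = response
        # Flush every response whose turn has now come.
        while self.next_to_send in self.buffered:
            self.sent.append(self.buffered.pop(self.next_to_send))
            self.next_to_send += 1

buf = PipelineBuffer()
buf.response_ready(1, "B")  # finished first, but must wait for request 0
buf.response_ready(0, "A")  # unblocks both A and B
buf.response_ready(2, "C")
```

The cost is exactly the buffering Donald mentions: a slow first response holds every later one in memory.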

Besides, I think pipelining is primarily targeted at static content, which
shouldn't be served through Django in general.

Does anyone know if HTTP/2 allows sending responses out of order? This would
make sub-optimal handling of HTTP/1.1 pipelining less of a concern going
forwards. We could live with a less efficient implementation.

Virtually nothing done with Django returns a generator, except pathological
cases that should really be implemented differently (says the guy who wrote
StreamingHttpResponse and never actually used it). So I’m not exceedingly
concerned about this use case. It should work, though, even if it’s slow.


> I believe the introduction of a message bus here makes things 

Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI

2016-05-06 Thread Donald Stufft
Let me just start out saying that I think that ASGI is reasonably designed for
the pattern that it is attempting to produce. That being said, I am of the
belief that the fundamental way that ASGI is designed to work misses the mark
for the kind of feature that people should be using in the general case.

First here are the general assumptions that I have from my readings of the ASGI
spec and the code that I've looked at. It's entirely possible that I've missed
something or I'm viewing some component of it incorrectly since I have not
steeped myself in ASGI so I figure it'd be useful to give a run down of my
mental model of ASGI.

Some sort of connection comes in on an edge server (Daphne in this case)
which is written in such a way as to be highly concurrent (likely to be
async in some fashion). From there it takes the incoming connection, parses
it and turns it into a message (or many messages for chunked encoding or
websockets). Once it turns it into a message it pushes it into some sort of
queue where you have a number of readers pulling messages off that queue,
processing them, and then putting some kind of response back on a different
queue where the original edge server will be listening and can pull that
off the queue, turn it back into whatever format the original
connection expected on the wire, and send it off.
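That mental model can be sketched with plain in-process queues standing in for the network-transparent channel layer; all names here are illustrative:

```python
import queue

# One shared request channel any worker can read, plus a per-client reply
# channel the interface server listens on.
requests = queue.Queue()
replies = {"client-1": queue.Queue()}

def interface_server_receive(path):
    # The edge server parses the raw connection into a message, tagged
    # with the reply channel so the response can be routed back.
    requests.put({"reply_channel": "client-1", "path": path})

def worker():
    # Any worker process can pick the message up, run the "view", and
    # push the response onto the named reply channel.
    message = requests.get()
    replies[message["reply_channel"]].put(
        {"status": 200, "content": "Hello " + message["path"]})

interface_server_receive("/index")
worker()
response = replies["client-1"].get()
```

The indirection is the whole point of the design: any number of workers can drain the request queue, and only the reply channel ties a response back to its connection.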

This has a number of purported benefits such as:

* Providing a mechanism for websockets in Django (this is the big one).
* Allowing background tasks to be written and run in the same process as
  Django.
* Making it easier for people to do graceful restarts of their code base.
* Support for long polling (since the HTTP connection only stays open in the
  async thread).
* Doing all of the above, while still being able to write sync code in Django.

Ok, so now let's break down why I don't personally like the fundamentals of
what ASGI is and why I don't see myself ever using it or wanting to use it.

In short, I think that the message bus adds an additional layer of complexity
that makes everything more complicated for very little actual
gain over other possible, less complex solutions. This message bus also
removes a key part of the amount of control that the server which is *actually*
receiving the connection has over the lifetime and process of the eventual
request.

For an example, in traditional HTTP servers where you have an open connection
associated with whatever view code you're running whenever the client
disconnects you're given a few options of what you can do, but the most common
option in my experience is that once the connection has been lost the HTTP
server cancels the execution of whatever view code it had been running [1].
This allows a single process to serve more by shedding the load of connections
that have since been disconnected for some reason, however in ASGI since
there's no way to remove an item from the queue or cancel it once it has begun
to be processed by a worker proccess you lose out on this ability to shed the
load of processing a request once it has already been scheduled.
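The cancel-on-disconnect behaviour being given up here is straightforward with coroutines. A minimal asyncio sketch (not Django code) in which the handler cancels its view task the moment a simulated client disconnects:

```python
import asyncio

async def view():
    # Stand-in for slow view code tied to an open connection.
    await asyncio.sleep(10)
    return "response"

async def handle(disconnect_event):
    # Because the handler owns the view task, it can cancel it as soon as
    # the client goes away, shedding that load immediately.
    view_task = asyncio.ensure_future(view())
    gone_task = asyncio.ensure_future(disconnect_event.wait())
    done, _pending = await asyncio.wait(
        [view_task, gone_task], return_when=asyncio.FIRST_COMPLETED)
    if gone_task in done:
        view_task.cancel()
        await asyncio.gather(view_task, return_exceptions=True)
        return "cancelled"
    gone_task.cancel()
    return view_task.result()

async def main():
    disconnect_event = asyncio.Event()
    handler = asyncio.ensure_future(handle(disconnect_event))
    await asyncio.sleep(0.01)
    disconnect_event.set()  # simulate the client disconnecting
    return await handler

result = asyncio.run(main())
```

Once the work has been serialized onto a queue, there is no equivalent handle to cancel, which is the trade-off being described.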

This additional complexity incurred by the message bus also ends up requiring
additional complexity layered onto ASGI to try and re-invent some of the
"natural" features of TCP and/or HTTP (or whatever the underlying protocol is).
An example of this would be the ``order`` keyword in the WebSocket spec,
something that isn't required and just naturally happens whenever you're
directly connected to a websocket because the ``order`` is just whatever bytes
come in off the wire. This also gets exposed in other features, like
backpressure: ASGI didn't originally have a concept of allowing the queue
to apply back pressure to the web connection, but now Andrew has started to come
around to the idea of adding a bound to the queue (which is good!). If
the indirection of the message bus hadn't been added, backpressure would
have occurred naturally: once enough things were processing,
new connections would stop being ``accept``-ed, the backlog would eventually
fill up, and new connections would block
waiting to connect. Now it's good that Andrew is adding the ability to bound
the queue, but that is something that will require care to tune in each
individual deployment (and will need to be regularly re-evaluated) rather than
something that just occurs naturally as a consequence of the design of the
system.
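The "natural" backpressure referred to here is the kernel's listen backlog, visible in any plain socket server:

```python
import socket

# A server that stops calling accept() leaves new connections queued in
# the kernel's listen backlog; once that backlog (5 here) fills, further
# connection attempts stall or are refused, with no application-level
# queue to size and tune.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(5)
host, port = server.getsockname()
# A busy worker that never calls server.accept() here would leave clients
# waiting in that backlog, throttling them automatically.
server.close()
```

The argument is that a message bus trades this built-in throttle for an explicit capacity number someone has to choose and maintain.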

Anytime you add a message bus you need to make a few trade-offs; the particular
trade-off that ASGI made is that it should prefer "at most once" delivery of
messages and low latency to guaranteed delivery. This choice is likely one of
the sanest ones you can make in regards to which trade offs you make for the
design of ASGI, but in that trade off you end up with new problems that don't
exist otherwise. For example, HTTP/1 has the concept of pipelining which allows
you 

Re: My Take on Django Channels

2016-05-06 Thread Ryan Hiebert

> On May 6, 2016, at 7:21 AM, Mark Lavin  wrote:
> 
> Ryan,
> 
> Sorry if you felt I was ignoring your reply to focus on the discussion with 
> Andrew. You both made a lot of the same points at about the same time but I 
> did want to touch on a couple things.

I totally get it. Focus on the Jedi, not the Padawan.
> 
> On Thursday, May 5, 2016 at 4:21:59 PM UTC-4, Ryan Hiebert wrote:
> [snip] Anything that doesn't use celery's `acks_late` is a candidate, because 
> in those cases even Celery doesn't guarantee delivery, and ASGI is a simpler 
> interface than the powerful, glorious behemoth that is Celery.  
> 
> This isn't the place for a long discussion about the inner workings of Celery 
> but I don't believe this is true. [snip]

I just meant them to be _candidates_ for being able to use a less reliable 
channel. I've got lots of non-acks-late stuff that I couldn't use channels for. 
No need for further discussion, I just want to point out that I think we're 
(nearly, at least) on the same page here. You're right that I misspoke when 
saying it doesn't guarantee delivery, but the end result is similar if the 
worker gets lost.
> 
> 
> All of the examples I've seen have pushed all HTTP requests through Redis. I 
> think some of the take-aways from this conversation will be to move away from 
> that and recommend Channels primarily for websockets and not for WSGI 
> requests.

He's talking now about having an inter-process channel, which doesn't cross 
system boundaries, so it alleviates my concerns. For my cases the latency will 
be good enough if we just avoid the machine hopping.

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/62DF2295-3D2B-4FA3-A274-4087760CE310%40ryanhiebert.com.
For more options, visit https://groups.google.com/d/optout.


Re: My Take on Django Channels

2016-05-06 Thread Mark Lavin
Ryan,

Sorry if you felt I was ignoring your reply to focus on the discussion with 
Andrew. You both made a lot of the same points at about the same time but I 
did want to touch on a couple things.

On Thursday, May 5, 2016 at 4:21:59 PM UTC-4, Ryan Hiebert wrote:
>
> Thank you, Mark, for starting this discussion. I, too, found myself simply 
> accepting that channels was the right way to go, despite having the same 
> questions you do. I realize this shouldn't be, so I've chimed in on some of 
> your comments. 
>
> > On May 5, 2016, at 2:34 PM, Mark Lavin  
> wrote: 
> > 
> > [snip] 
> > 
> > The Channel API is built more around a simple queue/list rather than a 
> full messaging layer. [snip] Kombu supports  [snip]. 
>
> The API was purposefully limited, because channels shouldn't need all 
> those capabilities. All this is spelled out in the documentation, which I 
> know you already understand because you've mentioned it elsewhere. I think 
> that the choice to use a more limited API makes sense, though that doesn't 
> necessarily mean that it is the right choice. 
> > 
> > [snip description of architecture] 
>
> First off, the concerns you mention make a lot of sense to me, and I've 
> been thinking along the same lines. 
>
> I've been considering an alternative to Daphne that only used 
> channels for websockets, but used WSGI for everything else. Or some 
> alternative split where some requests would be ASGI and some WSGI. I've 
> done some testing of the latency overhead that using channels adds (even on 
> my local machine), and it's not insignificant. I agree that finding a solution 
> that doesn't so drastically slow down the requests that we've already 
> worked hard to optimize is important. I'm not yet sure the right way to do 
> that. 
>
> As far as scaling, it is apparent to me that it will be very important to 
> have the workers split out, in a similar way to how we have different 
> celery instances processing different queues. This allows us to scale those 
> queues separately. While it doesn't appear to exist in the current 
> implementation, the channel names are obviously suited to such a split, and 
> I'd expect channels to grow the feature of selecting which channels a 
> worker should be processing (forgive me if I've just missed this 
> capability, Andrew). 
>

Similar to Celery, the workers can listen on only certain channels or 
exclude certain channels, which is sort of a means of doing 
priority: https://github.com/andrewgodwin/channels/issues/116. I would also 
like to see this expanded, or at least have the use case more clearly documented.
 

> > 
> > [[ comments on how this makes deployment harder ]] 
>
> ASGI is definitely more complex than WSGI. It's this complexity that gives 
> it power. However, to the best of my knowledge, there's not a push to be 
> dropping WSGI. If you're doing a simple request/response site, then you 
> don't need the complexity, and you probably should be using WSGI. However, 
> if you need it, having ASGI standardized in Django will help the community 
> build on the power that it brings. 

> 
> > Channels claims to have a better zero-downtime deployment story. 
> However, in practice I’m not convinced that will be true. [snip] 
>
> I've been concerned about this as well. On Heroku my web dynos don't go 
> down, because the new ones are booted up while the old ones are running, 
> and then a switch is flipped to have the router use the new dynos. Worker 
> dynos, however, do get shut down. Daphne won't be enough to keep my site 
> functioning. This is another reason I was thinking of a hybrid WSGI/ASGI 
> server. 
> > 
> > There is an idea floating around of using Channels for background 
> jobs/Celery replacement. It is not/should not be. [snip reasons] 
>
> It's not a Celery replacement. However, this simple interface may be good 
> enough for many things. Anything that doesn't use celery's `acks_late` is a 
> candidate, because in those cases even Celery doesn't guarantee delivery, 
> and ASGI is a simpler interface than the powerful, glorious behemoth that 
> is Celery.  


This isn't the place for a long discussion about the inner workings of 
Celery but I don't believe this is true. The prefetched tasks are not 
acknowledged until they are delivered to a worker for processing. Once 
delivered, the worker might die/be killed before it can complete the task 
but the message was delivered. That's the gap that acks_late solves: 
between the message delivery and the completion of the task. Not all 
brokers support message acknowledgement natively and so that feature is 
emulated which could lead to prefetched message loss or delay. I've 
certainly seen this when using Redis as the broker but never with RabbitMQ 
which has native support for acknowledgement.
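The timing difference can be modelled with a toy in-memory broker. This is a simplification of the behaviour Mark describes, not Celery's implementation; real brokers add prefetch and visibility-timeout details:

```python
def run_task(broker, acks_late, worker_dies):
    # With early acks (the default) the message is acked as soon as it is
    # delivered to a worker, so a crash mid-task loses it; with acks_late
    # the still-unacknowledged message can be redelivered.
    message = broker.pop(0)  # delivery; early ack means it is gone for good
    if worker_dies:
        if acks_late:
            broker.append(message)  # unacked message goes back for redelivery
        return None
    return message["n"] * 2  # the task body; acks_late would ack here

broker = [{"n": 21}]
run_task(broker, acks_late=False, worker_dies=True)
lost = (broker == [])                   # early ack + crash: the task is lost

broker = [{"n": 21}]
run_task(broker, acks_late=True, worker_dies=True)
redelivered = (broker == [{"n": 21}])   # acks_late + crash: it survives
```

This is the gap between "delivered" and "completed" that the paragraph above identifies.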
 

> There's an idea that something like Celery could be built on top of it. 
> That may or may not be a good idea, since Celery uses native protocol 
> features of 

Re: My Take on Django Channels

2016-05-06 Thread Mark Lavin
Yes I agree that we do want different things and have different goals. 
There is nothing wrong with coming to a state of respectful disagreement. 
I'm glad that some of the feedback could be helpful and I hope it can be 
incorporated into Channels.

As for a DEP, that would be nice and I'd love to participate in that 
process. To this point I don't feel like the argument for Channels has been 
weighed against existing alternative approaches which is largely what I've 
tried to start here. I mention the DEP process as a source of my own 
resentment for this change and part of the reason I've held this feedback 
in for so long. Again I don't think that was fair to you or the Django 
community to do so. You've been open about your work and your goals. I had 
plenty of opportunity to voice my concern to you publicly or privately and 
I chose not to do so for arguably petty reasons. I don't want to see this 
work blocked because of a lack of DEP if it has the support of the core 
team and the larger community. I've said my piece about this work and I'm 
letting those past feelings go so that I can contribute more constructively 
to this conversation.

- Mark

On Thursday, May 5, 2016 at 8:52:17 PM UTC-4, Andrew Godwin wrote:
>
>
>
> On Thu, May 5, 2016 at 5:13 PM, Mark Lavin  wrote:
>
>> Yes I agree with the value of a standardized way of communicating between 
>> these processes and I listed that as a highlight of Channels, though it 
>> quickly shifted into criticism. I think that's where we are crossing paths 
>> with relation to Kombu/AMQP as well. I find the messaging aspect of 
>> Channels far more interesting and valuable than ASGI as a larger 
>> specification. Messaging I do think needs to be network transparent. I just 
>> don't like that aspect tied into the HTTP handling. At this point I'm not 
>> sure how to decouple the messaging aspect from the HTTP layer since I feel 
>> they are very tightly bound in ASGI.
>>
>
> I see what you mean; HTTP is definitely less of a fit to ASGI than 
> WebSockets, and it wasn't even in there at all initially, but I felt that 
> the ability to unify everything inside Django to be a consumer was too 
> strong to pass up (plus the fact that it allowed long-polling HTTP which I 
> still use a lot in lieu of WebSocket support, mostly for work reasons).
>  
>
>>
>> Honestly I don't think Django *needs* tightly integrated websocket 
>> support but I do see the value in it so we aren't at a complete impasse. I 
>> suppose that's why it's my general preference to see a third-party solution 
>> gain traction before it's included. I played with integrating Django + 
>> aiohttp a few months ago. Nothing serious and I wouldn't call it an 
>> alternate proposal. It's barely a proof of concept: 
>> https://github.com/mlavin/aiodjango. My general inclination is that 
>> (insert wild hand waving) 
>> django.contrib.aiohttp/django.contrib.twisted/django.contrib.tornado would 
>> be the way forward for Django + websockets without a full scale rewrite of 
>> the WSGI specification.
>>
>>
> The other track for this was definitely to go the South route and have it 
> run externally, but based on my previous experience with that route it is 
> not scalable from a people perspective.
>
> I personally see this as something where any single third-party solution 
> is not going to gain enough traction to be tested and tried enough unless 
> it's defacto recommended by Django itself, at which point it's close to 
> being a core module with provisional status.
>
> I feel like we're never going to quite agree on the approach here; I've 
> explained my stance, you have explained yours, and I think we both have a 
> good idea where we stand. I agree with some of your concerns, especially 
> around introducing more moving parts, but then modern websites have so many 
> already my concerns are perpetually high.
>
> Given your feedback, I do want to work on a local, cross-process ASGI 
> backend and write up a full deployment story that uses WSGI servers for 
> HTTP and Daphne+worker servers for WebSockets, and have it as a top example 
> for what larger sites should do to deploy WebSockets initially; I think 
> that's an important piece of communication to show that this is only as 
> opt-in as you want it to be.
>
> I'm also hopeful that the introduction of chat, email and other protocol 
> (e.g. IoT) interface servers to further highlight the flexibility of a 
> general messaging + worker system will help move us towards a future with 
> less moving parts; ASGI and Channels was always meant to be something to be 
> built upon, a basis for making Django more capable in different arenas.
>
> Your point about the DEP process being circumvented was well made, too, 
> and I'll do my best from now on to make sure any large project I see being 
> attempted gets one in sooner rather than later.
>
> That said, though, I don't know that I can really change Channels in line 
> with