On Fri, May 6, 2016 at 2:11 PM, Carl Meyer <c...@oddbird.net> wrote:

>
> On 05/06/2016 02:31 PM, Andrew Godwin wrote:
> >
> > On Fri, May 6, 2016 at 1:19 PM, Carl Meyer <c...@oddbird.net
> > <mailto:c...@oddbird.net>> wrote:
> >
> >     On 05/06/2016 01:56 PM, Donald Stufft wrote:
> >     On 05/06/2016 01:56 PM, Donald Stufft wrote:
> >     > User level code would not be handling WebSockets asynchronously; that
> >     > would be left up to the web server (which would call the user level
> >     > code using deferToThread each time a websocket frame comes in).
> >     > Basically similar to what’s happening now, except instead of using
> >     > the network and a queue to allow calling sync user code from an async
> >     > process, you just use the primitives provided by the async framework.
> >
> >     I think (although I haven't looked at it carefully yet) you're
> >     basically describing the approach taken by hendrix [1]. I'd be
> >     curious, Andrew, if you considered a thread-based approach as an
> >     option and rejected it? It does seem like, purely on the accessibility
> >     front, it is perhaps even simpler than Channels (just in terms of how
> >     many services you need to deploy).
> >
> > Well, the thread-based approach is in channels; it's exactly how
> > manage.py runserver works (it starts daphne and 4 workers in their own
> > threads, and ties them together with the in-memory backend).
> >
> > So, yes, I considered it, and implemented it! I just didn't think that
> > solution alone was enough, which means some of the things a
> > local-memory-only backend could have enabled (like more detailed
> > operations on channels) didn't go into the API.
>
> Ha! Clearly I need to go have a play with channels. This seems to me a
> strong mark in favor of channels on the accessibility front, one that
> deserves more attention than it's gotten here: the in-memory backend
> with threads could be a reasonable way to set up even a production
> deployment of many small sites that want websockets and delayed tasks
> without requiring separate management of interface servers, Redis, and
> workers (or separate WSGI and async servers). Of
> course it has the downside that thread-safety becomes an issue, but
> people have been deploying Django under mod_wsgi with threaded workers
> for years, so that's not exactly new.
>
> Of course, there's still internally a message bus between the server and
> the workers, so this isn't exactly the approach Donald was preferring;
> it still comes with some of the tradeoffs of using a message queue at
> all, rather than having the async server just making its own decisions
> about allocating requests to threads.
>

Yup, that's definitely the tradeoff of this approach; it's not quite as
intelligent as a more direct solution could be. With an in-memory backend,
however, you can take the channel capacity down pretty low to provide
quicker backpressure to at least get _some_ of that back.
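(For illustration only, and hedging on exact option names: in the
Channels 1.x-era settings format, a low-capacity in-memory layer would
look roughly like this. The routing module path is made up, and I'm
assuming the in-memory backend honours a "capacity" option the same way
the other backends do:)

```python
# settings.py -- sketch, not a verified configuration.
# A low per-channel capacity makes send() hit "channel full" sooner,
# so the interface server gets backpressure earlier instead of
# buffering a long queue of messages in memory.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "myproject.routing.channel_routing",  # hypothetical module
        "CONFIG": {
            "capacity": 10,  # defaults are much higher; low values fail fast
        },
    },
}
```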

(Another thing I should mention - with the IPC backend, you could run an
asyncio interface server on Python 3 and keep running your legacy business
logic on a Python 2 worker, all on the same machine using speedy shared
memory to communicate)
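(Concretely, and again as a sketch rather than a tested setup: both the
Python 3 interface-server process and the Python 2 worker process would
point at the same IPC layer in their settings, with the shared "prefix"
being what ties them to the same shared-memory segments. The routing
module path here is made up:)

```python
# settings.py -- used unchanged by both the Py3 and Py2 processes.
# The asgi_ipc backend communicates over POSIX shared memory, so both
# processes must run on the same machine and use the same prefix.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_ipc.IPCChannelLayer",
        "ROUTING": "myproject.routing.channel_routing",  # hypothetical module
        "CONFIG": {
            "prefix": "mysite",  # must match across interface server and workers
        },
    },
}
```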

Andrew

To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/CAFwN1uor66Sc78rjdeGOumrvEH5fXxKeNjPs8p8Ni0C0uz8%3DYg%40mail.gmail.com.