Yes, I agree there is value in a standardized way of communicating between 
these processes, and I listed that as a highlight of Channels, though my 
note quickly shifted into criticism. I think that's also where we are 
talking past each other with regard to Kombu/AMQP. I find the messaging 
aspect of Channels far more interesting and valuable than ASGI as a larger 
specification. I do think messaging needs to be network transparent; I 
just don't like that aspect being tied into the HTTP handling. At this 
point I'm not sure how to decouple the messaging from the HTTP layer, 
since I feel the two are very tightly bound in ASGI.
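To make that coupling concrete, here's a rough stdlib-only sketch (the 
class and channel names are mine, not from the spec) of how both HTTP 
traffic and plain application messaging ride the same channel layer:

```python
# Toy stand-in for an ASGI-style channel layer; names are invented for
# illustration, not taken from the actual specification.
from collections import defaultdict, deque

class InMemoryChannelLayer:
    """A minimal in-memory channel layer: named FIFO queues of dicts."""
    def __init__(self):
        self._channels = defaultdict(deque)

    def send(self, channel, message):
        self._channels[channel].append(message)

    def receive(self, channel):
        queue = self._channels[channel]
        return queue.popleft() if queue else None

layer = InMemoryChannelLayer()

# HTTP handling rides on the layer...
layer.send("http.request", {"path": "/", "reply_channel": "http.response!abc"})
# ...and so does plain application messaging, over the same transport.
layer.send("worker.email", {"to": "user@example.com", "subject": "hi"})

print(layer.receive("http.request")["path"])     # -> /
print(layer.receive("worker.email")["subject"])  # -> hi
```

Everything is a message dict on a named channel, which is elegant, but it 
also means you can't adopt the messaging without the HTTP framing coming 
along with it.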

Honestly, I don't think Django *needs* tightly integrated websocket 
support, but I do see the value in it, so we aren't at a complete impasse. 
I suppose that's why my general preference is to see a third-party 
solution gain traction before it's included. I played with integrating 
Django and aiohttp a few months ago. Nothing serious, and I wouldn't call 
it an alternate proposal; it's barely a proof of 
concept: https://github.com/mlavin/aiodjango. My general inclination is 
that (insert wild hand-waving) 
django.contrib.aiohttp/django.contrib.twisted/django.contrib.tornado would 
be the way forward for Django + websockets without a full-scale rewrite of 
the WSGI specification.

Not sure if I touched on all of your questions so please let me know if it 
seems like I'm skipping over something.

- Mark

On Thursday, May 5, 2016 at 6:31:05 PM UTC-4, Andrew Godwin wrote:
>
>
>
> On Thu, May 5, 2016 at 2:19 PM, Mark Lavin <markd...@gmail.com> wrote:
>
>> Thank you for your comments and I have some brief replies.
>>
>>
>> If I'm understanding it correctly, groups are an emulated broadcast. I'm 
>> saying it would be an advantage for them to use pub/sub, but they do not.
>>
>
> You are correct. The reason Redis pub/sub is not used is that the ASGI 
> API allows applications to not listen continuously on channels and 
> instead check in every so often, so the backend uses Redis lists to give 
> messages some persistence; this could be changed, though. I do want to 
> improve the group send function so it runs as Lua inside Redis rather 
> than multi-sending from outside, however.
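(To illustrate the trade-off described here -- a rough stdlib sketch, 
class names invented: a list-backed channel holds a message for a consumer 
that only checks in later, while pub/sub drops it if no one is subscribed 
at send time.)

```python
# Invented illustrative classes; the real backends would be Redis lists
# (RPUSH/LPOP) versus Redis PUBLISH/SUBSCRIBE.
from collections import defaultdict, deque

class ListChannel:
    """List-style delivery: messages persist until someone pops them."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def send(self, channel, message):
        self._queues[channel].append(message)

    def receive(self, channel):
        q = self._queues[channel]
        return q.popleft() if q else None

class PubSubChannel:
    """Pub/sub-style delivery: only currently-subscribed callbacks hear it."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subs[channel].append(callback)

    def publish(self, channel, message):
        for cb in self._subs[channel]:
            cb(message)

lists, pubsub = ListChannel(), PubSubChannel()
lists.send("jobs", "resize-image")      # nobody is listening yet
pubsub.publish("jobs", "resize-image")  # nobody is subscribed yet

received = []
pubsub.subscribe("jobs", received.append)
print(lists.receive("jobs"))  # -> resize-image  (survived the wait)
print(received)               # -> []  (the published message is gone)
```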
>  
>
>>  
>>
>>>
>>> I've always tried to be clear that it is not a Celery replacement but 
>>> instead a way to offload some non-critical task if required.
>>>
>>
>> I don't agree that this has been clear. That is my primary criticism 
>> here. I don't think this should be encouraged. Ryan's reply continues with 
>> this confusion.
>>
>
> I would love to work with you on clearing this up, then; trying to 
> communicate what the design is intended to be is one of the hardest parts 
> of this project, especially considering there are so many avenues people 
> hear about this stuff through (and the fact that I do think _some_ 
> non-critical tasks could be offloaded into channels consumers, just not the 
> sort Celery is currently used for).
>  
>
>>
>> Yes the lock-in is an exaggeration, however, given the poor 
>> support/upkeep for third-party DB backends, I doubt the community will have 
>> better luck with Channel backends not officially supported by the Django 
>> core team. I'd be happy to be wrong here.
>>
>
> Yes, that's a fair comparison. There was even an effort to get a second 
> one going and ready to use before the merge, but unfortunately it hasn't 
> gotten anywhere yet.
>  
>
>>
>> Kombu is not to be confused with Celery. Kombu is a general purpose 
>> AMQP/messaging abstraction library. I don't think we agree on its potential 
>> role here. Perhaps it's better stated that I think Channels' minimalist 
>> API is too minimalist: I would prefer if additional AMQP-like 
>> abstractions existed, such as topic routing and QoS.
>>
>
> I understand what Kombu is (though it's maintained by the Celery team from 
> what I understand, which is why I refer to them collectively). I still 
> maintain that the design of AMQP and Kombu is unsuited for what I am trying 
> to accomplish here; maybe what I am trying to accomplish is wrong, and I'm 
> happy to argue that point, but based on what I'm trying to do, AMQP and 
> similar abstractions are not a good fit - and I did write one of the 
> earlier versions of Channels on top of Celery as an experiment.
>  
>
>>
>>> ASGI is essentially meant to be an implementation of the CSP/Go style of 
>>> message-passing interprocess communication, but cross-network rather than 
>>> merely cross-thread or cross-process as I believe that network transparency 
>>> makes for a much better deployment story and the ability to build a more 
>>> resilient infrastructure.
>>>
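(For anyone following along, the CSP-style model being referenced is easy 
to sketch with nothing but the stdlib -- the names below are mine: two 
threads that share no state and communicate only by passing messages over 
a queue, the shape ASGI then stretches across a network.)

```python
# Cross-thread message passing: the worker never touches shared state
# directly, it only reads messages from the channel.
import queue
import threading

channel = queue.Queue()
results = []

def worker():
    while True:
        message = channel.get()   # blocks until a message arrives
        if message is None:       # sentinel: shut down
            break
        results.append(message["text"].upper())

t = threading.Thread(target=worker)
t.start()

channel.put({"text": "hello"})
channel.put({"text": "world"})
channel.put(None)
t.join()
print(results)  # -> ['HELLO', 'WORLD']
```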
>>
>> Again I don't agree with this argument and I don't see anything in 
>> Channels which backs up this claim. I believe this is where we likely have 
>> a fundamental disagreement. I see this network transparency as additional 
>> latency. I see the addition of the backend/broker as another moving part to 
>> break.
>>
>
> Yes, I think this is fundamentally where we disagree, and most of the 
> other points stem from this.
>
> The only solutions for in-process multithreading in Python that are 
> anywhere near effective are reactor-based or greenlet-based async 
> solutions - asyncio, Twisted, gevent, etc. I don't think, given the state 
> and trend of modern CPU and memory limitations, that we are anywhere near 
> having one process on a single core able to handle a randomly-
> loadbalanced portion of modern site load; any one big calculation or bad 
> request is enough to bring that core down. In my opinion and experience, 
> any single thing you loadbalance to has to be capable of handling 
> multiple large requests at once, a situation we happily have today with 
> the architecture of things like uwsgi and gunicorn with worker 
> threads/processes.
>
> Based on that already-proven model of worker threads, I then extended it 
> out to be truly multi-process (the first version of Channels had 
> machine-only interprocess communication for transport). Given the 
> engineering challenges involved in building a good local-only 
> interprocess layer that works reliably - an effort that ended up using 
> Redis as the local broker anyway, rather than playing unstable games with 
> shared memory, files or similar - it seemed sensible to take it across a 
> network and let small clusters of machines coordinate, especially in 
> modern cloud hosting environments where any single machine is very 
> subject to bad-neighbour issues.
>
> You are right that it is yet another moving part, though. Would you have 
> less objection if ASGI was merely a cross-process communication interface 
> and just worked on a local machine using shared memory or the filesystem 
> (or some other local resource that worked, like maybe the UDP stack plus 
> other stuff) and required no extra server? If it was just a way of having a 
> WebSocket server and worker thread on the same machine communicating 
> without one having to directly start and interface with the other?
>  
>
>>
>> What's done is done and I don't want to start another process discussion 
>> at this point. Maybe another day. I'm doing my best to focus on the 
>> technical aspects of the proposal. That isn't to say that I'm without bias 
>> and I'm trying to own that. The fact is I have looked into Channels, the 
>> docs and the code, and I remain unconvinced this should be the blessed 
>> solution for websockets and I've tried to make it clear why. I'd much 
>> prefer to continue to run Tornado/aiohttp for the websocket process. That's 
>> not a personal attack. I just don't see Channels as a meaningful 
>> improvement over that direction.
>>
>>
> I understand, and I think you have a valid criticism. The way I see it, 
> however, is that even if people want to just keep running Tornado/aiohttp 
> for the websockets, would you not rather have a standardised way for 
> Django to run code triggered by that and send packets back into the 
> websockets? Channels isn't meant to be a thing you have to buy into 
> wholesale; it's meant to be something you can use just enough of to 
> fulfill your needs, or use entirely if you want everything it provides or 
> are prototyping rapidly.
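(A rough sketch of that standardised triggering, loosely modelled on the 
Channels consumer style rather than its exact API -- the function name and 
message shape here are mine: whichever server speaks the websocket, the 
Django-side code only ever sees a plain message and returns packets to 
send back.)

```python
# A consumer as a plain function over message dicts; any frontend --
# Daphne, Tornado, aiohttp -- could drive it the same way.

def ws_message(message):
    """Django-side code run for each websocket.receive-style message."""
    return {"text": message["text"].upper()}

incoming = {"channel": "websocket.receive", "text": "ping"}
reply = ws_message(incoming)
print(reply)  # -> {'text': 'PING'}
```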
>
> Django's job as a framework is, in my opinion, to provide solutions to 
> problems our users have that work for 90% of them, and don't get in the way 
> of the other 10%. Channels won't work for everyone's needs, and people that 
> want to ignore it are free to, but we're sorely missing a solution for the 
> people who just want to develop with websockets without having to bring in 
> a separate stack to manage it.
>
> Do you have an alternate proposal for how Django should integrate 
> websocket support, or do you believe it's not the job of Django to handle 
> at all and should be left entirely to other software? I'm curious, because 
> I obviously believe Django needs to support WebSockets in core, and that 
> this is a solution to that problem, but it might be that you don't believe 
> either, in which case we are unlikely to ever agree.
>
> Andrew 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers  (Contributions to Django itself)" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/4c67855b-2fd1-4f11-ac34-99e5e7a39564%40googlegroups.com.
