Raising this over in aio-libs for a bit of exploratory
discussion... https://groups.google.com/forum/#!topic/aio-libs/7rJ8Pb1y7aA
> I'm not sure where I sit on it - on one hand <...> on the other hand <...>
Same really, yes. :)
> > If we end up with one format for channel names and messages that is
> spread across two consumption forms (in-process async and cross-process
> channel layers), I think that would still be a useful enough standard and
> make a lot more people happy.
>
> Yes. I'm not sure if there aren't al
> My ideal solution is one that allows both approaches, and I'd like to
investigate that further. I think you're getting closer to the sort of
thing I'm imagining with the uvicorn designs, but I feel like there's still
something a little extra that could be done so it's possible to offload
over
FuncName() and FuncNameAsync() are common patterns in .NET land with
async/await. The snake case translation would be funcname_async. From a
quick scan, the JS world hasn't settled on a convention yet, though there
is a bit of discussion about how to differentiate the names. Personally I
don't
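To make the naming convention above concrete, here is a minimal sketch of a sync method paired with an `_async`-suffixed coroutine counterpart. The `Channel` class and its canned return value are purely illustrative, not any real library's API:

```python
import asyncio

class Channel:
    def receive(self):
        # Blocking receive (sketch: returns a canned message).
        return {"text": "hello"}

    async def receive_async(self):
        # Awaitable counterpart, following the funcname_async convention.
        await asyncio.sleep(0)
        return {"text": "hello"}

message = Channel().receive()
print(message["text"])
```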
Right - as long as you make clients deal with reconnection (which obviously
they should), then as long as your load-balancing has a way to shed
connections from a sticky server by closing them then all should be fine.
Honestly, I have always been annoyed at the no-local-context thing in
channels;
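The "make clients deal with reconnection" responsibility above can be sketched as reconnect-with-exponential-backoff logic. The `connect()` stand-in here simulates a server shedding a connection twice before accepting; names and failure behaviour are assumptions for illustration only:

```python
import itertools

def backoff_delays(base=0.5, cap=30.0):
    # Exponential backoff, capped: 0.5, 1.0, 2.0, ... up to `cap` seconds.
    for attempt in itertools.count():
        yield min(cap, base * (2 ** attempt))

attempts = {"n": 0}
def connect():
    # Stand-in: a sticky server sheds this client twice, then accepts.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("server shed this connection")
    return "connected"

delays = backoff_delays()
while True:
    try:
        state = connect()
        break
    except ConnectionError:
        next(delays)  # in real code: time.sleep(delay) before retrying

print(state, attempts["n"])
```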
> I wonder if there is a way of doing something like this well, so that
it's easy to write but also lets you scale later.
It's not obvious that sticky websockets are *necessarily* problematic for
typical use cases. A couple of things you'd want:
* Have clients be responsible for graceful reconn
Ah, I see, you are assuming sticky sockets. That makes things a lot easier
to architect, but a whole lot harder to load-balance (you have to get
your load-balancer to deliberately close sockets when a server is
overloaded, as many will not go away by themselves).
Still, it makes scaling down a l
> I note that your examples do not include "receiving messages from a
WebSocket and sending replies" - I would love to see how you propose to
tackle this given your current API, and I think it's the missing piece of
what I understand.
I've just added an `echo` WebSocket example.
I've also now
On Mon, Jun 12, 2017 at 10:53 PM, Tom Christie wrote:
> > def handler(channel_layer, channel_name, message):
>
> Oh great! That's not a million miles away from what I'm working towards on
> my side.
> Are you planning to eventually introduce something like that as part of
> the ASGI spec?
>
I ha
> def handler(channel_layer, channel_name, message):
Oh great! That's not a million miles away from what I'm working towards on
my side.
Are you planning to eventually introduce something like that as part of the
ASGI spec?
> So is the channels object just a place to stuff different function
h
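As a rough sketch of the `handler(channel_layer, channel_name, message)` signature quoted above: the `InMemoryLayer` below is a stand-in with an in-process `send()`, not the real ASGI channel-layer API, and the `reply_channel` key is assumed from the Channels 1.x convention:

```python
class InMemoryLayer:
    def __init__(self):
        self.outbox = []
    def send(self, channel, message):
        # Record (channel, message) pairs instead of crossing processes.
        self.outbox.append((channel, message))

def handler(channel_layer, channel_name, message):
    # Echo any websocket text frame back down the reply channel.
    if channel_name == "websocket.receive" and "text" in message:
        channel_layer.send(message["reply_channel"], {"text": message["text"]})

layer = InMemoryLayer()
handler(layer, "websocket.receive",
        {"reply_channel": "websocket.send!abc", "text": "hi"})
print(layer.outbox[0][1]["text"])
```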
On Fri, Jun 9, 2017 at 8:22 PM, Tom Christie wrote:
> Figure I may as well show the sort of thing I'm thinking wrt. a more
> constrained consumer callable interface...
>
> * A callable, taking two arguments, 'message' & 'channels'
> * Message being JSON-serializable python primitives.
> * Channel
Figure I may as well show the sort of thing I'm thinking wrt. a more
constrained consumer callable interface...
* A callable, taking two arguments, 'message' & 'channels'
* Message being JSON-serializable python primitives.
* Channels being a dictionary of str:channel
* Channel instances expose `
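A minimal sketch of the constrained consumer interface listed above: `message` is plain JSON-serializable data, and `channels` maps names to channel objects. The `send` method on channels is an assumption (the original list is truncated at "Channel instances expose"), and the channel name "reply" is illustrative:

```python
class DummyChannel:
    def __init__(self):
        self.messages = []
    def send(self, message):
        self.messages.append(message)

def consumer(message, channels):
    # Forward an upper-cased copy of the text to the reply channel.
    channels["reply"].send({"text": message["text"].upper()})

reply = DummyChannel()
consumer({"text": "ping"}, {"reply": reply})
print(reply.messages[0]["text"])
```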
On Thu, Jun 8, 2017 at 8:55 PM, Tom Christie wrote:
> > Any interface like this would literally just be "this function gets
> called with every event, but you can't listen for events on your own"
>
> Gotcha, yes. Although that wouldn't be the case with asyncio frameworks,
> since the channel read
> Any interface like this would literally just be "this function gets
called with every event, but you can't listen for events on your own"
Gotcha, yes. Although that wouldn't be the case with asyncio frameworks,
since the channel reader would be a coroutine.
Which makes for interesting design t
On Wed, Jun 7, 2017 at 7:05 PM, Tom Christie wrote:
> Making some more progress - https://github.com/tomchristie/uvicorn
> I'll look into adding streaming HTTP request bodies next, and then into
> adding a websocket protocol.
>
> I see that the consumer interface is part of the channels API refer
Making some more progress - https://github.com/tomchristie/uvicorn
I'll look into adding streaming HTTP request bodies next, and then into
adding a websocket protocol.
I see that the consumer interface is part of the channels API reference,
rather than part of the ASGI spec.
Is the plan to event
Right. I'll try and get a full async example up in channels-examples soon
to show off how this might work; I did introduce a Worker class into the
asgiref package last week as well, and the two things that you need to
override on that are "the list of channels to listen to" and "handle this
message
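The two override points described above (the list of channels to listen to, and a per-message handler) suggest a worker shape like the sketch below. This is NOT the real asgiref Worker API, only an illustration of the subclassing idea under discussion:

```python
class Worker:
    channels = []  # subclasses: the channels to listen to

    def handle(self, channel, message):
        # subclasses: handle this message
        raise NotImplementedError

    def dispatch(self, channel, message):
        # A real worker would block on the channel layer; here we feed
        # a single message in by hand.
        if channel in self.channels:
            self.handle(channel, message)

class EchoWorker(Worker):
    channels = ["websocket.receive"]
    def __init__(self):
        self.seen = []
    def handle(self, channel, message):
        self.seen.append(message)

w = EchoWorker()
w.dispatch("websocket.receive", {"text": "hi"})
print(len(w.seen))
```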
> I suspect there is potential for a very fast async-only layer that can
trigger the await that's hanging in a receive_async() directly from a
send() to a related channel, rather than sticking it onto a memory location
and waiting
Yup. Something that the gunicorn worker doesn't currently provid
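The "trigger the await directly from send()" idea above can be sketched with asyncio Futures: if a coroutine is parked in `receive_async()`, `send()` completes its Future immediately rather than writing to a shared store and polling. Class and method names here are illustrative, not a real layer implementation:

```python
import asyncio
from collections import defaultdict, deque

class DirectChannelLayer:
    def __init__(self):
        self.waiters = defaultdict(deque)   # channel -> parked Futures
        self.buffer = defaultdict(deque)    # channel -> undelivered messages

    def send(self, channel, message):
        waiters = self.waiters[channel]
        if waiters:
            # Wake the awaiting coroutine directly, no intermediate store.
            waiters.popleft().set_result(message)
        else:
            self.buffer[channel].append(message)

    async def receive_async(self, channel):
        if self.buffer[channel]:
            return self.buffer[channel].popleft()
        fut = asyncio.get_running_loop().create_future()
        self.waiters[channel].append(fut)
        return await fut

async def main():
    layer = DirectChannelLayer()
    task = asyncio.ensure_future(layer.receive_async("http.request"))
    await asyncio.sleep(0)          # let receive_async park its Future
    layer.send("http.request", {"path": "/"})
    print((await task)["path"])

asyncio.run(main())
```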
Thanks for the continued speedy research, Tom!
Weighing in on the design of an ASGI-direct protocol, the main issue I've
had at this point is not HTTP (as there's a single request message and the
body stuff could be massaged in somehow), but WebSocket, where you have
separate "connect", "receive"