Hi James:

Thanks for your response and the information in your (and other)
blog(s).  I haven't had time to learn all the new features of jdk9 yet,
so a look at Flow was interesting.

My first impression is that a time constraint would be better than
destroying asynchrony.  Rather than telling the source to send only one
message, and thus effectively making the exchange synchronous, why not
tell the source to send at most one message per 100 or 1000
milliseconds, or whatever rate makes appropriate back-pressure, while
my other threads are doing their thing?  When the WebSocket.Listener
has a complete message, it could trigger a notification event to
registered listeners.  And if the programmer wants to process
intermediate results, that would not be too difficult for the listener
to track.
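
Something along these lines is what I'm picturing for the listener side
(a rough sketch only -- the exact package name and method signatures
may not match the incubator build, and the Consumer<String> callback is
just a stand-in for however listeners would actually be registered):

import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.function.Consumer;

public final class NotifyingListener implements WebSocket.Listener {

    private final StringBuilder buffer = new StringBuilder();
    private final Consumer<String> messageListener;  // stand-in for registered listeners

    public NotifyingListener(Consumer<String> messageListener) {
        this.messageListener = messageListener;
    }

    @Override
    public void onOpen(WebSocket webSocket) {
        webSocket.request(1);                        // ask for the first part
    }

    @Override
    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
        buffer.append(data);                         // accumulate partial frames
        if (last) {
            messageListener.accept(buffer.toString());  // fire the notification event
            buffer.setLength(0);
        }
        webSocket.request(1);                        // immediately ask for the next part
        return null;                                 // null == finished with this part
    }
}

That way nothing ever blocks waiting on my code, and intermediate
results are already sitting in the buffer if somebody wants them.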

I can understand the potential need for "back-pressure" but I think
conserving the asynchronous nature of WebSocket is a high priority as
well.  Indeed, I've built my application on that feature of
WebSockets.

Thanks again.  At least I'm a bit more enlightened about the issue
being addressed.

Chuck





On Fri, Feb 9, 2018 at 11:38 PM, James Roper <ja...@lightbend.com> wrote:
> Hi Chuck,
>
> Presumably this API is similar in intention to the request method used in
> Reactive Streams (aka java.util.concurrent.Flow); that is, request is the
> means by which backpressure is propagated. One major problem with JDK8
> WebSockets is that there is no way to asynchronously propagate
> backpressure: you have to accept every message as it comes, and you can't
> tell the other end to back off. If the other end produces messages faster
> than you can consume them, your only two options are to fail fast or to
> risk running out of memory.  Reactive Streams solves this by requiring
> consumers to signal demand for data/messages before they receive any - an
> invocation of the request method simply says how many more elements the
> consumer is currently ready to receive, and it can be invoked many times
> as the consumer processes messages and becomes ready for more. Generally,
> for Reactive Streams, application developers are not expected to implement
> or invoke these APIs directly; instead, they are expected to use Reactive
> Streams implementations such as Akka Streams, RxJava or Reactor, which
> efficiently manage buffering and keep the buffer at an appropriate level,
> so the application developer can just focus on their business concerns.
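>
> Purely for illustration (as above, application code wouldn't normally
> hand-roll this), here is a bare-bones subscriber that signals demand one
> element at a time - the String element type and the println are just
> placeholders:
>
> import java.util.concurrent.Flow;
>
> final class OneAtATimeSubscriber implements Flow.Subscriber<String> {
>
>     private Flow.Subscription subscription;
>
>     @Override
>     public void onSubscribe(Flow.Subscription subscription) {
>         this.subscription = subscription;
>         subscription.request(1);      // initial demand: one element
>     }
>
>     @Override
>     public void onNext(String item) {
>         System.out.println(item);     // process the element...
>         subscription.request(1);      // ...then signal readiness for one more
>     }
>
>     @Override
>     public void onError(Throwable throwable) {
>         throwable.printStackTrace();
>     }
>
>     @Override
>     public void onComplete() {
>     }
> }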
>
> But that brings me to a problem that I'd like to give as feedback to the
> implementers - this API is not Reactive Streams, and therefore can't take
> advantage of Reactive Streams implementations and, more problematically,
> can't interoperate with other Reactive Streams sinks/sources. If I want to
> stream a WebSocket into a message broker that supports Reactive Streams, I
> can't. I would definitely hope that Reactive Streams support could be
> added to this API, at a minimum as a wrapper, so that application
> developers can focus on their business problems - plumbing and
> transforming messages from one place to another - rather than having to
> implement concurrent code to pass messages. It may well require wrapping
> messages in a high-level object - text, binary, ping, pong, etc. - to
> differentiate between the message types.
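>
> As a very rough sketch of the kind of wrapper I mean (the names are mine,
> it only covers text frames, the exact listener signatures may differ by
> build, and a real adapter would map the subscriber's request(n) back onto
> WebSocket.request(n) rather than requesting eagerly as this one does):
>
> import java.net.http.WebSocket;
> import java.util.concurrent.CompletionStage;
> import java.util.concurrent.SubmissionPublisher;
>
> // Exposes incoming text frames as a Flow.Publisher<CharSequence>.
> final class PublishingListener implements WebSocket.Listener {
>
>     final SubmissionPublisher<CharSequence> publisher = new SubmissionPublisher<>();
>
>     @Override
>     public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
>         publisher.submit(data.toString());  // copy the frame to downstream subscribers
>         webSocket.request(1);               // naive demand - see the caveat above
>         return null;
>     }
>
>     @Override
>     public void onError(WebSocket webSocket, Throwable error) {
>         publisher.closeExceptionally(error);
>     }
>
>     @Override
>     public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
>         publisher.close();
>         return null;
>     }
> }
>
> (SubmissionPublisher is used here only for brevity; a Reactive Streams
> library would give much finer control over buffering.)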
>
> For broader context on how this would fit into the Reactive Streams
> ecosystem, I've published a blog post on what it would look like if Java
> EE/EE4J were to adopt Reactive Streams everywhere; as it happens, it also
> includes proposals for using Reactive Streams in the JSR 356 WebSocket
> spec:
>
> https://developer.lightbend.com/blog/2018-02-06-reactive-streams-ee4j/index.html
>
> Regards,
>
> James
>
> On 9 February 2018 at 19:16, Chuck Davis <cjgun...@gmail.com> wrote:
>>
>> I've been using jdk8 websockets to develop my desktop Java
>> applications.  Now that jdk9 is on my machine, I started looking at
>> websockets, and I'm not at all sure I like what I see.  Can someone
>> familiar with this feature please explain the rationale for what is
>> happening?
>>
>> I'm concerned, at this initial stage, primarily by
>> WebSocket.request(long).  This "feature" seems to have at least two
>> very negative impacts:
>>
>> 1)  It appears to destroy the asynchronous nature of websockets;
>> 2)  It appears to place programmers in the impossible position of
>> guessing how many messages the server side might send.
>>
>> 1)  If everything has to stop while the client asks for more messages,
>> asynchronous communication is nullified.  The jdk8 implementation is,
>> therefore, much more functional.
>>
>> 2)  It would appear the only logical use of WebSocket.request() would
>> be to pass Long.MAX_VALUE, since there is no way to know how many
>> messages may be received.  And what if the programmer asks for 1
>> message and the next message has 3 parts?  We're screwed.
>> Additionally, the documentation specifically states that the
>> WebSocket.Listener does not distinguish between partial and whole
>> messages.  The jdk8 implementation's decoders/encoders accumulate
>> partial messages, assemble them until the message is complete, and
>> then pass the result to the Endpoint -- a much more satisfactory
>> arrangement.
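>>
>> To make that concrete, the only usage that seems sane to me looks
>> something like the sketch below (the class name and handleWholeMessage
>> are placeholders of mine, and the exact package and signatures may
>> differ by build) - it simply gives up back-pressure, and the listener
>> still has to assemble partial frames itself:
>>
>> import java.net.http.WebSocket;
>> import java.util.concurrent.CompletionStage;
>>
>> final class FireHoseListener implements WebSocket.Listener {
>>
>>     private final StringBuilder buffer = new StringBuilder();
>>
>>     @Override
>>     public void onOpen(WebSocket webSocket) {
>>         webSocket.request(Long.MAX_VALUE);  // effectively unbounded demand
>>     }
>>
>>     @Override
>>     public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
>>         buffer.append(data);                // partial frames still land here
>>         if (last) {
>>             handleWholeMessage(buffer.toString());
>>             buffer.setLength(0);
>>         }
>>         return null;
>>     }
>>
>>     private void handleWholeMessage(String message) {
>>         // application logic goes here
>>     }
>> }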
>>
>> I cannot fathom the purpose of this new wrinkle, or what improvement
>> it is supposed to bring.
>>
>> Thanks for any insight
>
>
>
>
> --
> James Roper
> Senior Octonaut
>
> Lightbend – Build reactive apps!
> Twitter: @jroper
