Hi Chuck,

Let's just imagine for a moment that there existed a reactive streams
equivalent to an ObjectReader (there isn't, and there's good reason why
there isn't, but I'll get to that later). Then the code would look
something like this (the Source API here is an Akka Streams-like API):

HttpRequest.create(URI.create("..."))
  .GET()
  .response((status, headers) ->
      BodySubscriber.from(
        Source.asPublisher()
          .via(ReactiveStreamsObjectReader.create())
          .forEach(object -> {
             // do what you want with each object as it's passed here
          })
      )
  );

Now that is actually less, and simpler, code than your implementation -
the above is the full code for the example, whereas you only included the
code for handling the body. You can also do a lot more than that: you
declaratively define how you process your objects, create complex graphs
feeding them to other locations, and so on. I'd suggest you read the docs
on Reactive Streams implementations, because one of the goals of the JDK9
client is to be asynchronous, and being asynchronous means you have to
turn your processing on its head: you don't pull (i.e., you don't invoke
readObject and get something back); rather, things are pushed to you, so
you define stages in your processing and provide callbacks where you want
to do custom things. Here are the docs for Akka Streams, to get acquainted
with an asynchronous streaming view of the world:

https://doc.akka.io/docs/akka/2.5/stream/index.html
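To make the push model concrete without any third-party library, here's a
minimal sketch using the java.util.concurrent.Flow API (the Reactive
Streams interfaces that shipped in JDK 9). The SubmissionPublisher here
just stands in for whatever is pushing items at you (e.g. an HTTP client
delivering response chunks); the point is that the subscriber never calls
a blocking read, it declares demand and gets items handed to it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class PushExample {
    public static void main(String[] args) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        // Stand-in for an asynchronous source pushing items to us.
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // signal demand - this is the backpressure
            }
            @Override public void onNext(String item) {
                received.add(item); // items are pushed to us as they arrive
                subscription.request(1); // ready for the next one
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        publisher.submit("chunk-1");
        publisher.submit("chunk-2");
        publisher.close();
        done.await();
        System.out.println(received); // prints [chunk-1, chunk-2]
    }
}
```

Note that the subscriber controls the pace via request(n), which is
exactly the backpressure mechanism Chuck objects to below.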

Now, as I said, no reactive streams implementation (that I'm aware of)
provides Java serialization support, and for good reason. It's 2018, Java
serialization has been shown, time and time again, to have major security
flaws in it. If you make an HTTP request to another application and parse
what it gives back using ObjectInputStream, you are opening up a trivial
way for that application to execute arbitrary code inside yours. Even if
you trust the remote system, it goes against the security best practice of
bulkheading - ensuring that if one application is compromised, the entire
system isn't. No one wants to provide support for such insecure practices
these days.

Here's a good summary of why you should never use Java serialization:

https://www.christian-schneider.net/JavaDeserializationSecurityFAQ.html
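If you absolutely must deserialize data that crossed a network boundary,
JDK 9's JEP 290 deserialization filters at least let you whitelist what
may be instantiated before any constructor or readObject runs. A sketch
(the filter pattern here is just an illustration; tailor it to the exact
classes you expect):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class FilterExample {
    // Round-trip an object through serialization with a JEP 290 filter applied.
    static Object roundTrip(Object obj, String pattern) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            // Reject anything not explicitly allowed, before it is instantiated.
            ois.setObjectInputFilter(ObjectInputFilter.Config.createFilter(pattern));
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Allow only classes from the java.base module; reject everything else.
        System.out.println(roundTrip("hello", "java.base/*;!*"));
    }
}
```

This narrows the attack surface, but the advice above stands: prefer a
format like JSON that cannot instantiate arbitrary classes at all.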

Regards,

James

On 18 February 2018 at 20:12, Chuck Davis <cjgun...@gmail.com> wrote:

> There is a great hush here regarding strategy for putting the pieces
> back together so here goes my tirade.
>
> Following is based on assumption of a 100k payload.
>
> With jdk8:
> The decoder received a ByteBuffer
> bais = new ByteArrayInputStream(byteBuffer.array());  // wrap the
> buffer -- optional method but implemented in Oracle JVM
> ois = new ObjectInputStream(bais)  // wrap the input stream
> MyDTO = ois.readObject();   // deserialize the object (which may
> contain hundreds or thousands of  objects)
>
> DONE!  Simple, clean and blazingly fast.
>
> With jdk9:
> The Listener receives a ByteBuffer (I read someplace the default size
> is potentially going to be 256 bytes)
> If it's the first part (or whole) create a byte[] of 10k (or whatever)
> Start copying bytes from the buffer to the array
> Check each byte to be sure there is room in the array for another byte
>     if room, copy the byte
>     if no room, copy the array to a larger array (current size + 10k)
> and then copy the byte
>     repeat until all messages are processed (i.e. each byte will have
> been copied somewhere between 1 and 10 times)
> bais = new ByteArrayInputStream(byte[]);
> ois = new ObjectInputStream(bais);
> MyDTO = ois.readObject();
>
> DONE!  More complicated, messy and very slow.  (i.e. lots of wasted cpu
> cycles)
>
> RESULT:  The formerly fabulous WebSocket has been rendered relatively
> useless as a platform for building responsive, bidirectional
> client/server applications.  Am I the only person who sees this as a
> HUGE regression of functionality??  I am ALARMED by this turn of
> events.
>
> OPINION:
>
> I'm pretty naive about the world of midrange and mainframe except that
> they can throw a lot of bytes in a short time.  But I imagine the
> choke-point of any desktop application is going to be the network.
> Unless somebody is running a Pentium I doubt any relatively modern
> desktop is going to have a difficult time keeping up with networking
> speeds.  It seems to me, therefore, that backpressure in WebSocket is
> a solution looking for a problem.  If backpressure is somehow
> essential in WebSocket include the possibility but please don't
> destroy the best client/server communication in the process.  If
> necessary, create two implementations:  WebSocketFast and
> WebSocketSlow.
>
> I'll go back to my cave now and pout about what has happened to my
> once fabulous WebSocket.
>
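Incidentally, the manual grow-by-10k array copying in the jdk9 sketch
quoted above isn't necessary even if you do stay with blocking-style
assembly: ByteArrayOutputStream already grows its backing array
geometrically, so each byte is copied an amortized constant number of
times rather than the 1-10 times described. A sketch (the accumulate
helper is hypothetical, standing in for whatever collects the buffers your
Listener receives):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class Accumulate {
    // Drain each incoming ByteBuffer into a growable sink. The sink doubles
    // its backing array as needed, so growth cost is amortized O(1) per byte.
    static byte[] accumulate(List<ByteBuffer> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (ByteBuffer buf : chunks) {
            byte[] tmp = new byte[buf.remaining()];
            buf.get(tmp); // bulk copy, no per-byte bounds checking in user code
            out.write(tmp, 0, tmp.length);
        }
        return out.toByteArray(); // hand this to ObjectInputStream, or better, a safe parser
    }

    public static void main(String[] args) {
        byte[] all = accumulate(List.of(
            ByteBuffer.wrap("hello ".getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap("world".getBytes(StandardCharsets.UTF_8))));
        System.out.println(new String(all, StandardCharsets.UTF_8)); // prints hello world
    }
}
```

So the performance complaint about reassembly is fixable; the security
complaint about what you then feed the bytes to is not.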



-- 
*James Roper*
*Senior Octonaut*

Lightbend <https://www.lightbend.com/> – Build reactive apps!
Twitter: @jroper <https://twitter.com/jroper>
