> when dealing with large numbers of small messages in a stream.
>
> One thing I notice between your two APIs is that in the more structured one
> you send the column name for every column value, which seems quite
> inefficient.
> On Tue, Oct 4, 2016 at 1:31 AM, Avinash Dongre wrote:
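To make that observation concrete: a layout that attaches the column name to every cell repeats each name once per row, while a columnar layout sends each name once up front. A stdlib-only sketch of the byte cost, with made-up column names (not Avinash's actual schema):

```java
import java.util.List;

public class ColumnNameOverhead {
    // Bytes spent on column names if every cell carries its column name
    // (the "more structured" layout discussed in the thread).
    static int perCellNameBytes(List<String> columns, int rows) {
        int bytes = 0;
        for (String c : columns) bytes += c.length() * rows;
        return bytes;
    }

    // Bytes spent on column names if they are sent once, up front.
    static int onceNameBytes(List<String> columns) {
        int bytes = 0;
        for (String c : columns) bytes += c.length();
        return bytes;
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "timestamp", "payload"); // hypothetical
        System.out.println(perCellNameBytes(cols, 1_000_000)); // 18000000
        System.out.println(onceNameBytes(cols));               // 18
    }
}
```

For a million rows the per-cell layout spends megabytes on names alone, which is the inefficiency being pointed out.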
I have a client which makes a request to the server, and the server replies
with a stream. This works fine for a small number of messages, but when the
number of messages is large, my client shuts down before I can receive all of
them.
Following is how I have implemented the server RPC:
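(The code that followed did not survive in this archive. For reference, a grpc-java server-streaming handler generally has this shape; here io.grpc's StreamObserver is stubbed out locally so the sketch compiles standalone, and scanRows and the String payload are made-up names, not Avinash's actual service.)

```java
import java.util.ArrayList;
import java.util.List;

public class ServerStreamSketch {
    // Local stand-in for io.grpc.stub.StreamObserver, so this compiles without grpc.
    interface StreamObserver<T> {
        void onNext(T value);
        void onError(Throwable t);
        void onCompleted();
    }

    // Shape of a server-streaming handler: push each message with onNext,
    // then signal the end of the stream with onCompleted.
    static void scanRows(int rowCount, StreamObserver<String> responseObserver) {
        try {
            for (int i = 0; i < rowCount; i++) {
                responseObserver.onNext("row-" + i);
            }
            responseObserver.onCompleted(); // the client sees completion only after this
        } catch (RuntimeException e) {
            responseObserver.onError(e);
        }
    }

    // Helper: collect everything the handler emits, in order.
    static List<String> collect(int rowCount) {
        List<String> received = new ArrayList<>();
        scanRows(rowCount, new StreamObserver<String>() {
            public void onNext(String v) { received.add(v); }
            public void onError(Throwable t) { received.add("<error>"); }
            public void onCompleted() { received.add("<done>"); }
        });
        return received;
    }

    public static void main(String[] args) {
        System.out.println(collect(3)); // [row-0, row-1, row-2, <done>]
    }
}
```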
Thanks Paul,
My mistake on the coding side.
The issue is fixed now.
Thanks
Avinash
On Tuesday, September 20, 2016 at 10:07:54 PM UTC+5:30, Avinash Dongre
wrote:
>
> Thanks Paul,
> I changed the code but I am still getting the same exception.
>
> Thanks
> Avinash
>
>
> /**
>  * Main launches the server from the command line.
>  */
> public static void main(String[] args) throws IOException, InterruptedException {
>   final HelloWorldServer server = new HelloWorldServer();
>   server.start();
>   server.blockUntilShutdown();
> }
>
>
I am getting the following exception on the server:

Server started, listening on 50051
Sep 20, 2016 7:24:07 PM io.grpc.netty.NettyServerHandler onStreamError
WARNING: Stream Error
io.netty.handler.codec.http2.Http2Exception$StreamException: Stream closed before write could take place
	at
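Avinash later confirms the bug was on his side. A common cause of this exact server warning is the client shutting its channel down before the stream has completed; the usual fix is to block on a CountDownLatch that the response observer releases in onCompleted/onError. A stdlib-only sketch of that wait pattern (the async grpc call is simulated with a plain thread):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientWaitSketch {
    static int run(int messageCount) throws InterruptedException {
        CountDownLatch finished = new CountDownLatch(1);
        AtomicInteger received = new AtomicInteger();

        // Stand-in for the async streaming call: a thread pushing messages.
        Thread serverStream = new Thread(() -> {
            for (int i = 0; i < messageCount; i++) {
                received.incrementAndGet(); // plays the role of onNext(...)
            }
            finished.countDown();           // plays the role of onCompleted()
        });
        serverStream.start();

        // The fix: wait for completion BEFORE shutting the channel down.
        // Tearing the channel down without this wait is what produces the
        // server-side "Stream closed before write could take place" warning.
        if (!finished.await(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("stream did not complete in time");
        }
        return received.get(); // safe to channel.shutdown() here in real code
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000)); // 1000
    }
}
```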
Hi All,
Please help.
Thanks
Avinash
On Saturday, September 24, 2016 at 12:01:02 PM UTC+5:30, Avinash Dongre
wrote:
>
> >>> Now I get around 130-135 MB/s.
>
> This result is with the gRPC client and gRPC server running on the same
> machine.
; performance
>
> -louis (from phone)
>
> On Sep 26, 2016 3:57 AM, "Avinash Dongre" <dongre@gmail.com> wrote:
>
> Hi All,
> Please help.
>
> Thanks
> Avinash
>
>
> On Saturday, September 24, 2016 at 12:01:02 PM UTC+5:30, Avinash Dongre wrote:
I was trying out "flow control"; see the example here:
https://github.com/davinash/grpc-bench/blob/master/bench/src/main/java/io/adongre/grpc/formatted/ScanFormattedServiceImpl.java
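For context, grpc-java's flow-control pattern is built on ServerCallStreamObserver.isReady() plus setOnReadyHandler(...): write only while the transport buffer has room, back off, and resume from the on-ready callback. A single-threaded, stdlib-only model of that loop (the transport is faked with a bounded queue; all names here are made up):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FlowControlSketch {
    // Single-threaded model of grpc-java flow control: isReady() reports
    // whether the transport buffer has room, and the on-ready handler
    // fires once room frees up again.
    static class FakeCall {
        static final int CAPACITY = 4;
        final Queue<Integer> buffer = new ArrayDeque<>();
        Runnable onReadyHandler = () -> {};
        int delivered = 0;

        boolean isReady() { return buffer.size() < CAPACITY; }
        void setOnReadyHandler(Runnable r) { onReadyHandler = r; }
        void onNext(int v) { buffer.add(v); }

        // Plays the role of the transport flushing queued writes.
        void drain() {
            delivered += buffer.size();
            buffer.clear();
            onReadyHandler.run();
        }
    }

    static int stream(int total) {
        FakeCall call = new FakeCall();
        int[] next = {0};

        // The standard pattern: write while isReady(), then stop and let
        // the on-ready handler pick up exactly where the loop left off.
        Runnable writer = () -> {
            while (next[0] < total && call.isReady()) {
                call.onNext(next[0]++);
            }
        };
        call.setOnReadyHandler(writer);
        writer.run();                      // fills the buffer, then backs off
        while (call.delivered < total) {
            call.drain();                  // transport flushes, handler refills
        }
        return call.delivered;
    }

    public static void main(String[] args) {
        System.out.println(stream(10)); // 10
    }
}
```

The point of the pattern is that the sender never buffers unboundedly ahead of a slow receiver, which matters exactly in the "large numbers of small messages" case discussed earlier in the thread.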
Thanks
Avinash
On Sunday, October 23, 2016 at 8:51:35 PM UTC+5:30, Matt Mitchell wrote:
>
> After digging a