[grpc-io] cause=io.netty.handler.codec.http2.Http2Exception: Header size exceeded max allowed size (10240)

2017-04-20 Thread cr22rc
Hi, I'm seeing the above error on one system that is much slower. On the other system the exact same setup works fine. This is on an Ubuntu system using gRPC 1.2; the client sees that error when talking to a server running in Docker. Java 1.8. The scenario is exactly the same on both machines running through
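A hedged sketch of one possible client-side workaround, assuming the 10240-byte limit in the exception is the client's HTTP/2 max-header-list-size: raise it on NettyChannelBuilder. The host and port are placeholders, and newer grpc-java releases expose this setting as maxInboundMetadataSize.

```java
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class RaiseHeaderLimit {
    public static void main(String[] args) {
        // Allow headers (metadata) larger than the ~10 KiB default that
        // triggers "Header size exceeded max allowed size (10240)".
        ManagedChannel channel = NettyChannelBuilder.forAddress("example.com", 7051)
                .usePlaintext(true)            // non-TLS, as in the thread's setup
                .maxHeaderListSize(32 * 1024)  // accept up to 32 KiB of headers
                .build();
        // ... create stubs against `channel`, then shut down when done ...
        channel.shutdownNow();
    }
}
```

This only raises the client's tolerance; if the server is the side rejecting the headers, the equivalent server-side option would need adjusting instead.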

[grpc-io] Re: [Java] gRPC failure Connection reset by peer with inactivity.

2017-08-04 Thread cr22rc
Also, any advice on good practices to make connections appear more resilient to the layers up the stack? On Friday, August 4, 2017 at 10:44:54 AM UTC-4, cr2...@gmail.com wrote: > > Hi, > I have code that's using the futureStub and NettyChannelBuilder with > no other properties set other

[grpc-io] [Java] gRPC failure Connection reset by peer with inactivity.

2017-08-04 Thread cr22rc
Hi, I have code that's using the futureStub and NettyChannelBuilder with no other properties set other than usePlaintext(true). I have users claiming everything works fine except when there's no activity on the connection for about 20 minutes. Then they see: gRPC
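A minimal sketch of the client keepalive settings grpc-java added around 1.3, which are the usual remedy when an intermediary resets connections after ~20 minutes of inactivity. The address and intervals are illustrative, not taken from the thread.

```java
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class KeepAliveChannel {
    public static void main(String[] args) {
        ManagedChannel channel = NettyChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext(true)
                .keepAliveTime(5, TimeUnit.MINUTES)      // send an HTTP/2 PING after 5 min of inactivity
                .keepAliveTimeout(20, TimeUnit.SECONDS)  // fail the connection if the PING goes unanswered
                .build();
        // Stubs built on this channel should now survive idle periods that
        // would otherwise hit a middlebox's ~20-minute idle timeout.
        channel.shutdownNow();
    }
}
```

Keeping the ping interval well above the server's permitted minimum matters; pinging too aggressively triggers the ENHANCE_YOUR_CALM GOAWAY discussed later in this digest.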

[grpc-io] Re: Java client streaming api questions

2017-06-19 Thread cr22rc
Hi, thanks for the reply. I was under the initial assumption that separate streaming stubs would create separate connections, but the connection belongs to the managed channel, so at the moment I've moved to just creating one per thread call. On Monday, June 19, 2017 at 2:44:39 PM UTC-4,
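To illustrate the stubs-versus-connections point: stubs are cheap wrappers, and every stub built on one ManagedChannel multiplexes over that channel's transport. GreeterGrpc here is a hypothetical generated service class, not something from the thread.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class SharedChannel {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext(true)
                .build();
        // Both stubs send their RPCs over the same underlying connection;
        // creating more stubs does not create more TCP connections.
        GreeterGrpc.GreeterStub async = GreeterGrpc.newStub(channel);
        GreeterGrpc.GreeterFutureStub future = GreeterGrpc.newFutureStub(channel);
    }
}
```

Separate connections would require separate channels, which is rarely needed since HTTP/2 multiplexes many concurrent streams on one connection.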

[grpc-io] Re: grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
On Tuesday, May 30, 2017 at 7:18:39 PM UTC-4, cr2...@gmail.com wrote: > > For the streaming API, have there been reports of load balancers closing > connections when there is no activity? > For Node this is being reported: > https://jira.hyperledger.org/browse/FAB-2787 > However

[grpc-io] grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
For the streaming API, have there been reports of load balancers closing connections when there is no activity? For Node this is being reported: https://jira.hyperledger.org/browse/FAB-2787 However, when I read the Java spec

Re: [grpc-io] Re: grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
Thanks, that's really great information. How about Java? Does Java do this too? On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote: > > For the Node client, I think you can enable keepalive HTTP2 pings using > the channel argument "grpc.keepalive_time_ms" as defined at >

Re: [grpc-io] Re: grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
I just noticed that NettyChannelBuilder has a keep-alive option. On Tuesday, May 30, 2017 at 8:21:32 PM UTC-4, cr2...@gmail.com wrote: > > Thanks, that's really great information. How about Java? Does Java do > this too? > > On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote:

[grpc-io] Re: grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
Currently the Node client is looking to fix this with a heartbeat at the application level, sending NOP messages. This seems like something gRPC/HTTP2 should handle; load balancers are very common. I don't think this is, off the bat, a good way to fix this, if it is even an issue for Java.
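The application-level heartbeat described above can be sketched with plain JDK scheduling; the `ping` Runnable stands in for whatever NOP message the application would send on the otherwise-idle stream (the class and parameter names are mine, not from the thread):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Heartbeat implements AutoCloseable {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Fire `ping` every `periodMillis` so intermediaries never see the
    // connection as idle. In the thread's scenario, `ping` would send a
    // NOP message on the gRPC stream.
    public Heartbeat(Runnable ping, long periodMillis) {
        scheduler.scheduleAtFixedRate(ping, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```

As the later replies in this digest note, HTTP/2-level keepalive pings (keepAliveTime on the channel builder) make this kind of hand-rolled heartbeat unnecessary in Java.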

Re: [grpc-io] Re: grpc streaming and loadbalancer close connections?

2017-05-30 Thread cr22rc
Some education please on NettyChannelBuilder vs. ManagedChannelBuilder. Code I inherited used ManagedChannelBuilder for grpc (non-TLS) and NettyChannelBuilder for grpcs (TLS). So NettyChannelBuilder is a subclass of ManagedChannelBuilder with more options. Is there any reason not to
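A sketch of the distinction, under the same assumptions as the inherited code (ports and the certificate path are placeholders): ManagedChannelBuilder is the transport-agnostic API, while NettyChannelBuilder is the Netty-specific builder beneath it that adds options such as sslContext for configuring TLS directly.

```java
import java.io.File;
import javax.net.ssl.SSLException;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;

public class BuilderComparison {
    public static void main(String[] args) throws SSLException {
        // Transport-agnostic: picks whichever transport provider is on the
        // classpath (Netty here) and exposes only the common options.
        ManagedChannel plain = ManagedChannelBuilder.forAddress("localhost", 7051)
                .usePlaintext(true)
                .build();

        // Netty-specific: needed when configuring TLS details directly.
        ManagedChannel tls = NettyChannelBuilder.forAddress("localhost", 7054)
                .sslContext(GrpcSslContexts.forClient()
                        .trustManager(new File("ca.pem"))  // placeholder CA cert path
                        .build())
                .build();
        plain.shutdownNow();
        tls.shutdownNow();
    }
}
```

Using NettyChannelBuilder for both cases works; sticking with ManagedChannelBuilder where possible just keeps the code portable across transports.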

[grpc-io] java keep alive WARNING: Received GOAWAY with ENHANCE_YOUR_CALM

2017-06-05 Thread cr22rc
Hi, I set up a gRPC stream with NettyChannel options keepAliveTime(60L, TimeUnit.SECONDS) and keepAliveTimeout(8L, TimeUnit.SECONDS). At times in the code I've added a sleep for 15 min. I see the keep-alives in Wireshark, but after a time I see:

[grpc-io] Re: java keep alive WARNING: Received GOAWAY with ENHANCE_YOUR_CALM

2017-06-05 Thread cr22rc
Thanks, that makes sense. I was setting this to 1 sec at one time :) ... not production, just *playing* around, and I think it was working without a hitch. However, I think they upgraded the server gRPC and now I'm getting this. If this is all that's going on, no worries then. Like I said, oddly

Re: [grpc-io] java keep alive WARNING: Received GOAWAY with ENHANCE_YOUR_CALM

2017-06-05 Thread cr22rc
Seems to me there should be an automatic backoff: the server warns "don't call back again for 4 min"; well-behaved clients follow that, bad ones get disconnected. On Monday, June 5, 2017 at 7:17:30 PM UTC-4, Eric Anderson wrote: > > In 1.3 we started allowing clients to be more aggressive. From
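For context, the GOAWAY with ENHANCE_YOUR_CALM is the server enforcing its keepalive policy against clients that ping too often. A hedged sketch of the server-side knobs grpc-java introduced in the same 1.3 timeframe (the port and intervals are illustrative):

```java
import java.util.concurrent.TimeUnit;
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class PermissiveKeepAliveServer {
    public static void main(String[] args) {
        // Rather than each client guessing a safe interval, the server can
        // declare what ping rate it tolerates; clients pinging more often
        // than this still get GOAWAY with ENHANCE_YOUR_CALM.
        Server server = NettyServerBuilder.forPort(50051)
                .permitKeepAliveTime(1, TimeUnit.MINUTES)  // accept pings as often as once a minute
                .permitKeepAliveWithoutCalls(true)         // even on connections with no active RPCs
                // .addService(...)                        // service registration elided
                .build();
    }
}
```

Keeping the client's keepAliveTime comfortably above the server's permitKeepAliveTime avoids the disconnects described in this thread.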

[grpc-io] Java client get server TLS certificate from a StreamObserver connection.

2017-09-05 Thread cr22rc
Hi, is there a means for the client to obtain the certificate bytes from the TLS negotiation that were sent by the server? The reason I ask: to avoid a replay security scenario, the idea is for the client to hash this and send it back with requests. I honestly don't know the details of this, but
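On the hashing half of the idea: grpc-java exposes the negotiated javax.net.ssl.SSLSession through the Grpc.TRANSPORT_ATTR_SSL_SESSION call attribute, and getPeerCertificates()[0].getEncoded() on that session yields the server certificate's DER bytes; digesting them is then plain JDK code. This is a sketch of the poster's proposed scheme, not an established gRPC feature.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CertHash {
    // Hex-encoded SHA-256 of the certificate's DER bytes; the client could
    // attach this to subsequent requests per the anti-replay idea above.
    public static String sha256Hex(byte[] der) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(der);
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

Whether echoing a certificate hash actually prevents replay depends on the surrounding protocol; that design question is outside what gRPC itself provides.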

[grpc-io] grpc Java client Outbound messages size limitations ?

2017-11-08 Thread cr22rc
Hi, I know of maxInboundMessageSize for messages incoming to a client. Are there any controls/restrictions on outbound? I also know the server can restrict how much it will receive (its inbound), but I'm not really interested in that, just client-side configurations/restrictions. Thanks
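For what it's worth, a sketch of the per-stub size knobs, assuming a grpc-java version that provides withMaxOutboundMessageSize (GreeterGrpc is a hypothetical generated service). By default the client imposes no outbound cap, which matches the answer the poster received.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class MessageSizeLimits {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext(true)
                .build();
        // Per-stub limits; left unset, outbound is effectively unlimited
        // on the client (the server's inbound limit still applies).
        GreeterGrpc.GreeterFutureStub stub = GreeterGrpc.newFutureStub(channel)
                .withMaxInboundMessageSize(16 * 1024 * 1024)   // cap what the client will accept
                .withMaxOutboundMessageSize(4 * 1024 * 1024);  // cap what the client will send
    }
}
```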

Re: [grpc-io] Re: grpc Java client Outbound messages size limitations ?

2017-11-08 Thread cr22rc
Nothing -- I don't ever try to set the bar that high ;) I saw a Node implementation setting something on outbound and was worried I was missing something on the Java side. Making sure there is no limit for outbound is just fine for me. :) On Wednesday, November 8, 2017 at 5:09:10 PM