Hi, I'm seeing the above error on one system that is much slower. On another
system the exact same setup works fine.
This is on an Ubuntu system using gRPC 1.2. The client sees the error when
talking to a server running in Docker.
The scenario is exactly the same on both machines running through
Also, any advice on good practices to make connections appear more resilient
to the layers up the stack?
On Friday, August 4, 2017 at 10:44:54 AM UTC-4, cr2...@gmail.com wrote:
> I have code that's using the futureStub and using NettyChannelBuilder with
> no other properties set other
I have code that uses the futureStub and NettyChannelBuilder with no
properties set other than usePlaintext(true). I have users claiming that
everything works fine except when there's no activity on the connection for
about 20 minutes. Then they see:
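For context, the channel setup described above amounts to roughly this (a
minimal sketch; the host, port, and generated stub class are placeholders):

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class PlaintextChannel {
  public static void main(String[] args) {
    // Minimal setup as described: nothing configured beyond plaintext.
    ManagedChannel channel = NettyChannelBuilder
        .forAddress("example-host", 50051)  // placeholder address
        .usePlaintext(true)                 // 1.x-era signature
        .build();
    // A future stub would come from the channel, e.g. via a hypothetical
    // generated class: FooGrpc.newFutureStub(channel)
    channel.shutdown();
  }
}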
Hi, thanks for the reply. I was under the initial assumption that separate
streaming stubs would create separate connections, but the connection belongs
to the managed channel, so at the moment I've moved to just creating one per
On Monday, June 19, 2017 at 2:44:39 PM UTC-4,
On Tuesday, May 30, 2017 at 7:18:39 PM UTC-4, cr2...@gmail.com wrote:
> For streaming APIs, have there been issues reported where there is no
> activity and load balancers close connections?
> For Node there is this being reported
For streaming APIs, have there been issues reported where there is no
activity and load balancers close connections?
For Node there is this being reported
However, when I read the Java spec
Thanks, that's really great information. How about Java? Does Java do
this too?
On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote:
> For the Node client, I think you can enable keepalive HTTP2 pings using
> the channel argument "grpc.keepalive_time_ms" as defined at
I just noticed that NettyChannelBuilder has a keep-alive option.
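For anyone else landing here, a sketch of those options (assuming grpc-java
1.3+, where the keepAlive* methods exist on NettyChannelBuilder; the
intervals are illustrative, not recommendations):

import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class KeepaliveChannel {
  public static void main(String[] args) {
    // HTTP/2 keepalive pings keep an otherwise idle connection alive
    // through intermediaries (e.g. load balancers) that drop quiet ones.
    ManagedChannel channel = NettyChannelBuilder
        .forAddress("example-host", 50051)       // placeholder address
        .keepAliveTime(5, TimeUnit.MINUTES)      // ping after this much idle time
        .keepAliveTimeout(20, TimeUnit.SECONDS)  // fail the transport if the ping goes unanswered
        .keepAliveWithoutCalls(true)             // ping even with no active RPCs
        .usePlaintext(true)
        .build();
    channel.shutdown();
  }
}

Note the server may enforce its own limits on how often clients ping, so
overly aggressive values can backfire.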
On Tuesday, May 30, 2017 at 8:21:32 PM UTC-4, cr2...@gmail.com wrote:
> Thanks, that's really great information. How about Java? Does Java do
> this too?
> On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote:
Currently the Node client is looking to fix this by doing a heartbeat at the
application level, sending NOP messages. This seems like something gRPC/HTTP2
should handle; load balancers are very common.
I don't think this is, off the bat, a good way to fix this, if it is even an
issue for
Some education, please, on the NettyChannelBuilder vs. ManagedChannelBuilder.
Code I inherited used ManagedChannelBuilder for grpc (non-TLS) and
NettyChannelBuilder for grpcs (TLS). So NettyChannelBuilder is a subclass of
ManagedChannelBuilder with more options. Is there any reason not to just use
NettyChannelBuilder for both?
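For comparison, a sketch of both (placeholder addresses; GrpcSslContexts is
the Netty-transport TLS helper):

import javax.net.ssl.SSLException;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;

public class BuilderComparison {
  // Transport-agnostic: picks whichever transport is on the classpath.
  static ManagedChannel plaintext() {
    return ManagedChannelBuilder.forAddress("example-host", 50051)
        .usePlaintext(true)
        .build();
  }

  // Netty-specific: returns the same ManagedChannel type, but exposes
  // Netty-only options such as a custom SslContext for TLS.
  static ManagedChannel tls() throws SSLException {
    return NettyChannelBuilder.forAddress("example-host", 50052)
        .sslContext(GrpcSslContexts.forClient().build())
        .build();
  }
}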
On a gRPC stream I set these NettyChannel options:
At times in the code I've added a sleep for 15 min. I see on Wireshark the
But after a time I see:
Thanks, that makes sense. I was setting this to 1 sec at one time :) ... not
production, just *playing* around, and I think it was working without a
hitch. However, I think they upgraded the server gRPC and now I'm getting
this. If this is all that's going on, no worries then. Like I said, oddly
Seems to me that there should be an automatic back-off: the server warns
"don't call back again for 4 min"; well-behaved clients follow that, bad ones
get
On Monday, June 5, 2017 at 7:17:30 PM UTC-4, Eric Anderson wrote:
> In 1.3 we started allowing clients to be more aggressive. From
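For context, the server side of that policy looks roughly like this (a
sketch, assuming grpc-java 1.3+ with the Netty transport; the values are
illustrative). A client pinging more often than the server permits gets a
GOAWAY with "too_many_pings":

import java.util.concurrent.TimeUnit;

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class KeepalivePolicyServer {
  static Server build() {
    // Pings arriving faster than permitted draw a GOAWAY
    // ("too_many_pings"); well-behaved clients then back off.
    return NettyServerBuilder.forPort(50051)
        .permitKeepAliveTime(4, TimeUnit.MINUTES)  // minimum allowed ping interval
        .permitKeepAliveWithoutCalls(true)         // tolerate pings on idle connections
        .build();
  }
}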
Is there a means for the client to obtain the certificate bytes that the
server sent during the TLS negotiation? I honestly don't know the details,
but the reason I ask: to avoid a replay security scenario, the idea is for
the client to hash this and send it back with requests.
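One approach that might work (a sketch I have not verified end to end):
grpc-java exposes the negotiated SSLSession as a transport attribute, so a
ClientInterceptor can pull the peer certificate once headers arrive.
Grpc.TRANSPORT_ATTR_SSL_SESSION and ClientCall.getAttributes() were
experimental API at the time:

import java.security.cert.Certificate;
import javax.net.ssl.SSLSession;

import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Grpc;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

// Grabs the server's certificate bytes from the TLS session once the
// call's headers arrive (the transport is fully connected by then).
public class PeerCertInterceptor implements ClientInterceptor {
  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions options, Channel next) {
    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, options)) {
      @Override
      public void start(Listener<RespT> responseListener, Metadata requestHeaders) {
        super.start(new SimpleForwardingClientCallListener<RespT>(responseListener) {
          @Override
          public void onHeaders(Metadata responseHeaders) {
            SSLSession session = getAttributes().get(Grpc.TRANSPORT_ATTR_SSL_SESSION);
            if (session != null) {
              try {
                Certificate[] chain = session.getPeerCertificates();
                byte[] serverCert = chain[0].getEncoded();  // hash and attach to requests
              } catch (Exception e) {
                // plaintext connection, or the peer was not verified
              }
            }
            super.onHeaders(responseHeaders);
          }
        }, requestHeaders);
      }
    };
  }
}

You'd install it on the channel builder via intercept(new PeerCertInterceptor()).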
I know of maxInboundMessageSize for incoming messages to a client. Are
there any controls/restrictions on outbound?
I also know the server can restrict how much it will receive (its inbound),
but I'm not really interested in that, just the client side.
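For reference, the client-side knob mentioned above (a minimal sketch; the
address and size are placeholders):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class InboundCapChannel {
  public static void main(String[] args) {
    // The client-side cap applies only to messages it receives;
    // outbound size is bounded by the server's own inbound limit.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("example-host", 50051)        // placeholder address
        .maxInboundMessageSize(16 * 1024 * 1024)  // 16 MiB receive cap
        .usePlaintext(true)
        .build();
    channel.shutdown();
  }
}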
Nothing -- I don't ever try to set the bar that high ;)
I saw a Node implementation setting something on outbound and was worried I
was missing something on the Java side. Just making sure there is no limit
for outbound, which is just fine for me. :)
On Wednesday, November 8, 2017 at 5:09:10 PM