Hi Noah- The entire output of the run looked like this:
21:46:52.966 - main() starting
21:46:52.969 - Channel Up- Testing Stub
21:46:52.969 - Making RPC
21:46:53.071 - Leaving testStub()
21:46:53.073 - Leaving test()
D0530 21:46:54.07800 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
Some education please on NettyChannelBuilder vs ManagedChannelBuilder.
Code I inherited used ManagedChannelBuilder for grpc (non-TLS) and
NettyChannelBuilder for grpcs (TLS). So NettyChannelBuilder is a subclass
of ManagedChannelBuilder with more options. Is there any reason to not
just use NettyChannelBuilder for both?
I just noticed that NettyChannelBuilder has a keep-alive option.
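A minimal sketch of what using NettyChannelBuilder for both cases might look
like (assuming the grpc-netty transport is on the classpath; builder method
names vary a bit across grpc-java releases, and host/port are placeholders):

    import io.grpc.ManagedChannel;
    import io.grpc.netty.NegotiationType;
    import io.grpc.netty.NettyChannelBuilder;

    import java.util.concurrent.TimeUnit;

    public class ChannelFactory {
      // grpc (non-TLS): NettyChannelBuilder with plaintext negotiation.
      static ManagedChannel plaintext(String host, int port) {
        return NettyChannelBuilder.forAddress(host, port)
            .negotiationType(NegotiationType.PLAINTEXT)
            // HTTP/2 keepalive: ping after this much transport inactivity.
            .keepAliveTime(30, TimeUnit.SECONDS)
            .build();
      }

      // grpcs (TLS): same builder, only the negotiation type changes.
      static ManagedChannel tls(String host, int port) {
        return NettyChannelBuilder.forAddress(host, port)
            .negotiationType(NegotiationType.TLS)
            .keepAliveTime(30, TimeUnit.SECONDS)
            .build();
      }
    }

Since NettyChannelBuilder extends ManagedChannelBuilder, both methods return
an ordinary ManagedChannel, so the rest of the code doesn't care which path
built it.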
On Tuesday, May 30, 2017 at 8:21:32 PM UTC-4, cr2...@gmail.com wrote:
>
> Thanks, that's really great information. How about Java? Does Java do
> this too?
>
> On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote:
>
Also, for completeness: the client would use the DNS name of the *balancer*
when creating a channel. Note also that your balancer DNS name may point to
several addresses. If one or more of them are marked as balancers, grpclb
will be used. Which balancer will be picked up if there's more than one
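In grpc-java terms, for instance, that might look like the following sketch
(lb.example.com:443 is a hypothetical balancer DNS name and port):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class BalancerChannelExample {
      public static void main(String[] args) {
        // Target the *balancer's* DNS name, not a backend's; the "dns:///"
        // prefix selects the DNS name resolver explicitly.
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("dns:///lb.example.com:443").build();
        // ... issue RPCs against the channel as usual ...
        channel.shutdown();
      }
    }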
Hi Tudor,
Apologies for the late response. It's timely that the PR that adds support
for the c-ares DNS resolver (https://github.com/grpc/grpc/pull/11237) is
very close to being merged. This is the piece that's missing for making
grpclb work in open source. In particular:
- Have a look at t
Thanks, that's really great information. How about Java? Does Java do
this too?
On Tuesday, May 30, 2017 at 8:01:37 PM UTC-4, Michael Lumish wrote:
>
> For the Node client, I think you can enable keepalive HTTP2 pings using
> the channel argument "grpc.keepalive_time_ms" as defined at
> http
The document should be updated to
mention https://github.com/grpc/grpc/pull/11237, perhaps once it's been
merged.
On Tuesday, 31 January 2017 10:30:04 UTC-8, Mark D. Roth wrote:
>
> I've put together the following gRFC for encoding grpclb data in DNS:
>
> https://github.com/grpc/proposal/pull/10
On Tue, May 23, 2017 at 1:51 AM, wrote:
> BUILD FROM SOURCE
>
> You only need to go through these steps if you are planning to develop
> gRPC C#. If you are a user of gRPC C#, go to Usage section above.
>
> Windows, Linux or Mac OS X
>
> - The easiest way to build is using the run_tests
For the Node client, I think you can enable keepalive HTTP2 pings using the
channel argument "grpc.keepalive_time_ms" as defined at
https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/grpc_types.h#L230.
This argument is passed in the options object, which is the third argument
to the
Currently the Node client is looking to fix this by doing a heartbeat at the
application level, sending NOP messages. This seems like something
grpc/http2 should handle; load balancers are very common.
I don't think this is, off the bat, a good way to fix this, if it is even an
issue for Java.
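For what it's worth, grpc-java does expose transport-level HTTP/2 keepalive
pings, so an application-level heartbeat shouldn't be needed there. A sketch,
assuming the grpc-netty transport (host/port are placeholders, and
keepAliveWithoutCalls only appeared in newer grpc-java releases):

    import io.grpc.ManagedChannel;
    import io.grpc.netty.NegotiationType;
    import io.grpc.netty.NettyChannelBuilder;

    import java.util.concurrent.TimeUnit;

    public class KeepAliveExample {
      public static void main(String[] args) {
        ManagedChannel channel = NettyChannelBuilder
            .forAddress("localhost", 50051)  // placeholder host/port
            .negotiationType(NegotiationType.PLAINTEXT)
            // Send an HTTP/2 ping after this much transport inactivity...
            .keepAliveTime(60, TimeUnit.SECONDS)
            // ...and close the connection if the ping isn't acked in time.
            .keepAliveTimeout(20, TimeUnit.SECONDS)
            // Newer grpc-java releases can also ping with no active RPCs,
            // which is what keeps idle, load-balanced connections open:
            // .keepAliveWithoutCalls(true)
            .build();
        channel.shutdown();
      }
    }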
On Tuesday, May 30, 2017 at 7:18:39 PM UTC-4, cr2...@gmail.com wrote:
>
> For streaming APIs, have there been issues reported where, when there is
> no activity, load balancers would close connections?
> For Node there is this being reported:
> https://jira.hyperledger.org/browse/FAB-2787
> However wh
If not in idle mode, can or does grpc implement some sort of keep-alive?
For streaming APIs, have there been issues reported where, when there is no
activity, load balancers would close connections?
For Node there is this being reported:
https://jira.hyperledger.org/browse/FAB-2787
However, when I read the Java spec
http://www.grpc.io/grpc-java/javadoc/io/grpc/ManagedChanne
My guess is that you have an older copy of protobuf installed on the system
somewhere. Can you manually search and remove older protobuf libraries?
On Sun, May 28, 2017 at 12:09 PM, Bugsfunny wrote:
> Please help me with this.
> On Wednesday, May 24, 2017 at 12:44:25 PM UTC+5:30, Bugsfunny wrote:
So the reason that ten-second deadline exists is to allow for garbage
collectors (for the wrapped languages) to do some needed cleanup. It does
not make much sense to shorten that or make it configurable. Since it
happens at shutdown time, it is ok to take a little extra time.
There is an environm
Hi Noah-
Setting GRPC_VERBOSITY=DEBUG produced this:
D0530 08:45:57.04800 17740 iomgr.c:101] Waiting for 1 iomgr objects to
be destroyed
ten times, one per second. So it would seem that your hunch is correct, but
I really don't know what to do now. GRPC_TRACE=all didn't produce any
output
Could you turn on debugging and attach the output? I have a hunch that this
comes from grpc_iomgr_shutdown. It will wait ten seconds to try to free all
iomgr objects before giving up and leaking memory.
export GRPC_VERBOSITY=DEBUG for minimal debugging. That should be enough,
but for even more tracing you can set GRPC_TRACE=all.
Late to the party, but ... I have a need for this, so ...
On Friday, October 7, 2016 at 6:31:33 AM UTC+11, Nathaniel Manista wrote:
>
> On Sun, Sep 25, 2016 at 8:27 AM, >
> wrote:
>
>> Nathaniel: That looks like an example of a method implementation.
>>
>
> I think I'm so used to seeing the messa