Would you please provide more information about how you free the service? The
response should be freed before the service gets freed. I'm also not sure how
gRPC's GC works; maybe somebody from the gRPC team can give a better answer.
PS: To manually free an arena, you can use
https://cs.corp.google.com/piper///depot/google3/net/proto2/public/arena.h?q=arena.h&dr=CSs&l=351
On Wednesday, January
Hey, then what is the use of grpc.keepalive_time_ms?
On Thursday, November 16, 2017 at 10:06:44 AM UTC-8, John Hume wrote:
>
> We're trying to keep connections alive between Ruby grpc clients and a
> grpc-java server between unary rpc calls, which sometimes come minutes
> apart. Based on
> https
Hi,
I am new to gRPC and am using gRPC C++ for my project. I am using a streaming
RPC where the client streams its messages and the server listens to them.
While streaming is in progress on the client side (say, after a
couple of messages have been sent from the client end), I restart the server (Ctrl+C and
Mark: I would be interested in taking over this, assuming you don't have
many more concerns.
I have two changes I would like to propose as well:
1. Add a CT_DEBUG level between CT_UNKNOWN and CT_INFO. ChannelTraces
at the DEBUG level are *never* surfaced up to channelz, and implementation
> What I meant by subsequently created channel is: does every call to
> CreateChannel/CreateCustomChannel perform an explicit DNS lookup, or
> may a subsequent call reuse a cached result of a successful resolution
> from the previous one?
>
Each channel does its own DNS lookup. Granted, the DNS lookup
That's correct, but note that the xDS spec already exists, as linked above; it
is not something new that the gRPC team is proposing.
On Saturday, March 2, 2019 at 8:44:45 PM UTC-8, Rama Rao wrote:
>
> Srini,
>
> Does it mean that if control plane implements the new xDS api that gRPC
> team is going pro
Yes, you can write your own interceptors to perform retries, although they
won't have quite the same functionality as the built-in implementation
will. For example, there's no way to guarantee from an interceptor that
each attempt is routed to a different server when doing client-side load
balancing.
The problem is that only gRPC team members can run custom queries on the
dashboard (that's for security reasons).
The data itself is stored in BigQuery; perhaps we could export it somehow.
On Monday, March 4, 2019 at 10:19:09 PM UTC+1, mrus...@gmail.com wrote:
>
> Hi GRPC Experts,
>
>
>
> I’m