Is there a timeline or general ETA on that?
Thanks for the reply!
On Wed, Sep 28, 2016 at 4:04 PM, Qi Zhao wrote:
> My speculation is that the longer latency is from contention. We are
> working on performance optimization now and will keep you posted on the
> improvement.
>
This was resolved in https://github.com/grpc/grpc-java/issues/2300. There
was an older protobuf getting included in the application, which was
causing trouble.
On Monday, September 26, 2016 at 8:51:04 AM UTC-7, Smallufo Huang wrote:
>
> Hi, I am new to gRPC / protobuf.
> This is my first
What I have noticed is that it correlates directly with the number of
concurrent requests I am sending over the given connection at any time,
e.g. if I am sending 2000 concurrent requests of simulated load over a
single connection I see crazy high latency; however, if I (for testing
purposes) send those across a
On Mon, Sep 26, 2016 at 1:10 AM, KaterchenSama wrote:
> When I gave a watchRequest to the watch function (instead of an
> iterator/stream of watchRequests) the very same sanity check didn't
> trigger. Assuming that it is supposed to, should I open up a bug
>
This works
Thanks
On Wednesday, September 28, 2016 at 1:29:41 AM UTC+2, Michael Rose wrote:
>
> You can access it via the Context:
>
> Deadline deadline = Context.current().getDeadline();
>
> Keep in mind, Context is thread local, so if you dispatch to another
> thread you'll need to ensure that
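The thread-local behavior Michael warns about can be illustrated with a plain `ThreadLocal` (a self-contained sketch; grpc-java's `Context` stores its state the same way, and `Context.wrap(...)` / `Context.currentContextExecutor(...)` perform the capture-and-reinstall step for you):

```java
import java.util.concurrent.atomic.AtomicReference;

public class ThreadLocalDemo {
    static final ThreadLocal<String> DEADLINE = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        DEADLINE.set("5s");
        AtomicReference<String> seen = new AtomicReference<>();

        // Naive dispatch: the new thread has its own (empty) thread-local
        // slot, so the value set above is not visible there.
        Thread naive = new Thread(() -> seen.set(DEADLINE.get()));
        naive.start();
        naive.join();
        System.out.println(seen.get()); // null: the value did not propagate

        // Fix: capture the value on the calling thread and re-install it on
        // the worker thread, which is what grpc-java's Context.wrap(...) and
        // Context.currentContextExecutor(...) do for you.
        String captured = DEADLINE.get();
        Thread wrapped = new Thread(() -> {
            DEADLINE.set(captured);
            seen.set(DEADLINE.get());
        });
        wrapped.start();
        wrapped.join();
        System.out.println(seen.get()); // 5s
    }
}
```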
I am looking for a way to enforce a limit on the size of the incoming
requests (uncompressed). I have tried setting maxMessageSize on
NettyServerBuilder, but it did not seem to work. After some debugging and
searching, I found that this limit is only enforced in MessageDeframer, and
only for
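For context, the cap being discussed amounts to comparing each inbound message's size against a configured maximum (4 MiB by default in grpc-java, failing the call with RESOURCE_EXHAUSTED when exceeded). A minimal, self-contained sketch of that check; the names here are hypothetical, not the real MessageDeframer internals:

```java
// Hypothetical sketch of the size check grpc-java performs while deframing;
// illustrative names only, not the actual MessageDeframer API.
public class SizeLimitCheck {
    static void checkMessageSize(int messageBytes, int maxInboundBytes) {
        if (messageBytes > maxInboundBytes) {
            // grpc-java fails such calls with status RESOURCE_EXHAUSTED.
            throw new IllegalStateException("message of " + messageBytes
                    + " bytes exceeds limit of " + maxInboundBytes);
        }
    }

    public static void main(String[] args) {
        int defaultCap = 4 * 1024 * 1024; // grpc-java's default inbound cap
        checkMessageSize(1024, defaultCap); // within the cap: accepted
        try {
            checkMessageSize(8 * 1024 * 1024, defaultCap);
            System.out.println("no exception");
        } catch (IllegalStateException e) {
            System.out.println("rejected oversized message");
        }
    }
}
```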