That is nice.
Thank you for the reply, Koen.
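
For anyone who wants to reproduce that kind of measurement, here is a rough
Go sketch of a ping-pong test over a single bidirectional stream (Koen's test
was in C++; the Echo service, its Ping method, the pb.Msg type, the generated
package path, and the server address below are all made up for illustration):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"

        // Hypothetical generated package for:
        //   service Echo { rpc Ping(stream Msg) returns (stream Msg); }
        pb "example.com/echo"
    )

    func main() {
        // Dial once up front so connection setup stays out of the timed loop.
        conn, err := grpc.Dial("server:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        // Open a single bidirectional stream and ping-pong over it.
        stream, err := pb.NewEchoClient(conn).Ping(context.Background())
        if err != nil {
            log.Fatalf("open stream: %v", err)
        }

        const n = 10000
        start := time.Now()
        for i := 0; i < n; i++ {
            if err := stream.Send(&pb.Msg{Payload: []byte("ping")}); err != nil {
                log.Fatalf("send: %v", err)
            }
            // Wait for the echo before sending the next message, so each
            // iteration is one full round trip.
            if _, err := stream.Recv(); err != nil {
                log.Fatalf("recv: %v", err)
            }
        }
        log.Printf("average round trip: %v", time.Since(start)/n)
    }

The dial and the stream setup happen once, outside the timed loop, so the loop
measures steady-state round trips only.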
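
And on Eric's question further down about reusing the ClientConn, a sketch of
the unary case where the connection is dialed once and shared by every RPC
instead of being re-dialed per call (UnaryPing is another hypothetical method
on the same made-up service):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"

        pb "example.com/echo" // same hypothetical generated package as above
    )

    func main() {
        // One ClientConn, dialed once and shared by every RPC. Re-dialing per
        // call would add TCP and HTTP/2 (and TLS, if enabled) setup to each
        // request's latency.
        conn, err := grpc.Dial("server:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
        client := pb.NewEchoClient(conn)

        const n = 10000
        start := time.Now()
        for i := 0; i < n; i++ {
            // Hypothetical unary method on the same service.
            if _, err := client.UnaryPing(context.Background(), &pb.Msg{Payload: []byte("ping")}); err != nil {
                log.Fatalf("rpc: %v", err)
            }
        }
        log.Printf("average unary latency: %v", time.Since(start)/n)
    }

A ClientConn is safe for concurrent use, so the same pattern holds when many
goroutines issue RPCs in parallel.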


On Tue, Aug 9, 2016 at 12:56 PM, Koen De Keyser <[email protected]>
wrote:

> I have measured an average round-trip latency of 320 µs (A sends a message
> to B, and B then sends a message back to A). This was a streaming RPC (C++
> implementation) between 2 machines (Xeon E5) connected over 40 Gbps
> Ethernet. It does include some additional logic on both sides, but nothing
> substantial, so pure gRPC latency might actually be slightly lower.
>
> Koen
>
> On Tuesday, 9 August 2016 19:33:31 UTC+2, Pradeep Singh wrote:
>>
>> Oh, I was running the benchmark included in the gRPC source code.
>> I think it reuses the same connection.
>>
>> 300 µs sounds really good.
>>
>> What latency do you guys notice when client and server are running on
>> different hosts?
>>
>> Thanks,
>>
>> On Tue, Aug 9, 2016 at 8:58 AM, Eric Anderson <[email protected]> wrote:
>>
>>> On Mon, Aug 8, 2016 at 12:35 AM, <[email protected]> wrote:
>>>
>>>> With a custom zmq messaging bus we get latency on the order of
>>>> microseconds between 2 services on the same host (21 µs avg), vs. an
>>>> average of 2 ms for gRPC.
>>>>
>>>
>>> Did you reuse the ClientConn between RPCs?
>>>
>>> In our performance tests on GCE (using fairly ordinary machines, where
>>> netperf measures ~100 µs), we see ~300 µs latency for unary RPCs and
>>> ~225 µs for streaming in Go.
>>>
>>
>>
>>
>> --
>> Pradeep Singh
>>
>


-- 
Pradeep Singh

