Hi Qi,

Was any performance-improvement work done this quarter, as you indicated 
a couple of months ago? If not, is there a plan? Thanks!

Jack


On Tuesday, August 30, 2016 at 1:56:07 PM UTC-4, Qi Zhao wrote:
>
> Jack,
>
> We believe there is ample room for performance to improve (because we 
> have not really invested time in it yet), and it is among our top 
> priorities for next quarter. That said, we have been very careful about 
> performance-related details (e.g., memory allocation, contention, etc.) 
> since day one. I am not sure in what environment you ran the benchmark, 
> but 0.82ms sounds far longer than I would expect. Can you share your 
> benchmark, including the protobufs, if convenient? We have seen some 
> fundamental errors when external people have tried to benchmark gRPC.
>
> Regarding adding another transport, I am not sure which code you read. We 
> intend to encapsulate all transport-specific details in the "transport" 
> package; the "grpc" package should be transport-agnostic. If you saw 
> HTTP-specific details in the "grpc" package, please let us know.
>
>
> Craig,
>
> Given that our dashboard shows grpc-go's ping-pong latency is good but 
> its throughput is bad, I think the low-hanging fruit would be reducing 
> the number of syscalls and the amount of lock contention.
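>
> As an assumed illustration (the names are made up, not grpc-go's actual 
> code), batching many small frame writes into fewer write syscalls could 
> look roughly like this:
>
> package transportsketch
>
> import (
>     "bufio"
>     "net"
>     "sync"
> )
>
> // batchedWriter coalesces small frame writes into fewer syscalls by
> // buffering them and flushing once per batch of frames.
> type batchedWriter struct {
>     mu sync.Mutex
>     bw *bufio.Writer
> }
>
> func newBatchedWriter(c net.Conn) *batchedWriter {
>     return &batchedWriter{bw: bufio.NewWriterSize(c, 64*1024)}
> }
>
> // Write buffers one frame; no syscall happens yet.
> func (w *batchedWriter) Write(p []byte) (int, error) {
>     w.mu.Lock()
>     defer w.mu.Unlock()
>     return w.bw.Write(p)
> }
>
> // Flush hands the accumulated batch to the kernel in a single write.
> func (w *batchedWriter) Flush() error {
>     w.mu.Lock()
>     defer w.mu.Unlock()
>     return w.bw.Flush()
> }
>
> This is only a sketch; the real win depends on how frames from 
> concurrent streams get funneled into the batch.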
>
> On Tue, Aug 30, 2016 at 9:38 AM, 'Craig Tiller' via grpc.io 
> <[email protected]> wrote:
>
>> Start with the question: "What would I change in HTTP/2 to improve QPS?"
>>
>> When we asked that question, we didn't really come up with anything 
>> interesting, and so we went with HTTP/2. (To be fair, there are some things 
>> we'd probably love to change in HTTP/2, but they're not about raw QPS... 
>> I'd love to see header frames included in flow control, for instance.)
>>
>> I suspect there's ample room in the Go implementation to improve. Our 
>> own benchmarks indicate that the C++ implementation is between 3 and 10x 
>> faster than the Go implementation, for example - and I don't believe 
>> anything fundamental is going on that prevents at least closing that gap.
>>
>> Questions that are interesting and merit answers:
>> - how much time is spent:
>>    - between making the API call to initiate an RPC and bytes being 
>> written to the wire, client side
>>    - between reading bytes from the wire and placing bytes back on the 
>> wire, server side
>>    - between reading bytes from the wire and returning control to the 
>> application, client side
>> - how much time is spent doing hpack? Is a faster implementation 
>> interesting? (See the measurement sketch after this list.)
>> - how many locks, allocations, and context switches are performed for 
>> each unary request? (Breaking them down into the three categories listed 
>> above would aid analysis.)
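>>
>> For the hpack question, a minimal Go microbenchmark (a sketch; the 
>> header set is illustrative) using the stock golang.org/x/net/http2/hpack 
>> encoder would be:
>>
>> package hpackbench
>>
>> import (
>>     "bytes"
>>     "testing"
>>
>>     "golang.org/x/net/http2/hpack"
>> )
>>
>> // BenchmarkHpackEncode measures encoding a typical gRPC request
>> // header block. The encoder's dynamic table persists across
>> // iterations, as it would on a long-lived connection.
>> func BenchmarkHpackEncode(b *testing.B) {
>>     var buf bytes.Buffer
>>     enc := hpack.NewEncoder(&buf)
>>     fields := []hpack.HeaderField{
>>         {Name: ":method", Value: "POST"},
>>         {Name: ":scheme", Value: "http"},
>>         {Name: ":path", Value: "/helloworld.Greeter/SayHello"},
>>         {Name: "content-type", Value: "application/grpc"},
>>         {Name: "te", Value: "trailers"},
>>     }
>>     b.ResetTimer()
>>     for i := 0; i < b.N; i++ {
>>         buf.Reset()
>>         for _, f := range fields {
>>             if err := enc.WriteField(f); err != nil {
>>                 b.Fatal(err)
>>             }
>>         }
>>     }
>> }
>>
>> Running it with "go test -bench=." gives a per-header-block cost to 
>> compare against total per-RPC time.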
>>
>> Armed with this data, we can start to determine where to spend time 
>> improving things.
>>
>> On Tue, Aug 30, 2016 at 9:22 AM <[email protected]> wrote:
>>
>>> Thanks Paul and Nicolas. 
>>>
>>> After a closer look at grpc-go, the implementation seems more tightly 
>>> coupled to HTTP/2 than I thought. For example, much of the code is 
>>> "stream" oriented. Plus, the protocol today is pretty much wired to 
>>> HTTP/2 [1]. I am starting to wonder whether it really makes sense to 
>>> implement a TCP transport for gRPC. I would appreciate your thoughts on 
>>> this.
>>>
>>> Jack
>>>
>>> [1] https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md
>>>
>>>
>>> On Saturday, August 27, 2016 at 10:57:19 PM UTC-4, Paul Grosu wrote:
>>>>
>>>>
>>>> Ah, thanks Nico - yeah, I should have noticed that more carefully :)  The 
>>>> Go implementation is definitely easier to read.
>>>>
>>>> On Saturday, August 27, 2016 at 9:32:09 PM UTC-4, Nicolas Noble wrote:
>>>>>
>>>>> Unfortunately, Jack's question was about the Go version of gRPC, so 
>>>>> these don't apply here :-)
>>>>>
>>>>> We'll get someone from the grpc-go team to reply here. 
>>>>>
>>>>> On Sat, Aug 27, 2016, 17:17 Paul Grosu <[email protected]> wrote:
>>>>>
>>>>>>
>>>>>> Hi Jack,
>>>>>>
>>>>>> I believe a lot of this is not yet documented - I think Mark Roth is 
>>>>>> working on it - so you will be relying on my memory, which I hope you 
>>>>>> don't mind :)  This will be a fun team effort.  You can start a TCP 
>>>>>> server via the grpc_tcp_server_start function, as defined here:
>>>>>>
>>>>>>
>>>>>> https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_server_posix.c#L682-L722
>>>>>>
>>>>>> Keep in mind that you can create a TCP endpoint via the 
>>>>>> grpc_tcp_create function, as defined here:
>>>>>>
>>>>>>
>>>>>> https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_posix.c#L467-L493
>>>>>>
>>>>>> One thing to note is that the endpoint's behavior is defined via the 
>>>>>> vtable, a grpc_endpoint_vtable struct, in which you put pointers to 
>>>>>> your transport functions for reading, writing, etc., like so:
>>>>>>
>>>>>> static const grpc_endpoint_vtable vtable = {tcp_read,
>>>>>>                                             tcp_write,
>>>>>>                                             tcp_get_workqueue,
>>>>>>                                             tcp_add_to_pollset,
>>>>>>                                             tcp_add_to_pollset_set,
>>>>>>                                             tcp_shutdown,
>>>>>>                                             tcp_destroy,
>>>>>>                                             tcp_get_peer};
>>>>>>
>>>>>> The above is defined on the following lines:
>>>>>>
>>>>>>
>>>>>> https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_posix.c#L458-L465
>>>>>>
>>>>>> If you are unfamiliar with the endpoint definition, it is in the 
>>>>>> following file:
>>>>>>
>>>>>>
>>>>>> https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/endpoint.h#L49-L62
>>>>>>
>>>>>> It looks like this:
>>>>>>
>>>>>> /* An endpoint caps a streaming channel between two communicating
>>>>>>    processes. Examples may be: a tcp socket, <stdin+stdout>, or some
>>>>>>    shared memory. */
>>>>>>
>>>>>> typedef struct grpc_endpoint grpc_endpoint;
>>>>>> typedef struct grpc_endpoint_vtable grpc_endpoint_vtable;
>>>>>>
>>>>>> struct grpc_endpoint_vtable {
>>>>>>   void (*read)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
>>>>>>                gpr_slice_buffer *slices, grpc_closure *cb);
>>>>>>   void (*write)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
>>>>>>                 gpr_slice_buffer *slices, grpc_closure *cb);
>>>>>>   grpc_workqueue *(*get_workqueue)(grpc_endpoint *ep);
>>>>>>   void (*add_to_pollset)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
>>>>>>                          grpc_pollset *pollset);
>>>>>>   void (*add_to_pollset_set)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
>>>>>>                              grpc_pollset_set *pollset);
>>>>>>   void (*shutdown)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep);
>>>>>>   void (*destroy)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep);
>>>>>>   char *(*get_peer)(grpc_endpoint *ep);
>>>>>> };
>>>>>>
>>>>>> So basically you can write your own transport functions if you 
>>>>>> prefer.  Again, all of this is based on my reading of the code over 
>>>>>> time, so if anyone thinks I misinterpreted something, please correct me.
>>>>>>
>>>>>> Hope it helps,
>>>>>> Paul
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Friday, August 26, 2016 at 12:36:39 PM UTC-4, [email protected] 
>>>>>> wrote:
>>>>>>>
>>>>>>> Hi community,
>>>>>>>
>>>>>>> We've run some informal benchmarks to compare gRPC-go's performance 
>>>>>>> with alternatives, and it's apparent that gRPC could improve in this 
>>>>>>> area. One option we'd like to experiment with is seeing how much 
>>>>>>> performance we would gain by using raw TCP instead of HTTP/2. I 
>>>>>>> understand that HTTP/2 has been one of the core values of the gRPC 
>>>>>>> project, but it would be interesting to understand its performance 
>>>>>>> implications and to explore a TCP option that might fit well in many 
>>>>>>> scenarios. To do so, I'd like some advice on how to replace the 
>>>>>>> transport layer. Is it sufficient to replace the implementation in 
>>>>>>> the google.golang.org/grpc/transport package? Our initial experiment 
>>>>>>> indicates that some code in the call/invocation layer is coupled 
>>>>>>> with the HTTP/2 transport, e.g., the context carries some 
>>>>>>> HTTP/2-related state. Your advice on how to do a clean switch to TCP 
>>>>>>> would be appreciated.
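>>>>>>>
>>>>>>> To make that concrete, the minimal framing we have in mind is 
>>>>>>> something like the sketch below (purely illustrative; note it has 
>>>>>>> no multiplexing or flow control, unlike HTTP/2):
>>>>>>>
>>>>>>> package tcpframing
>>>>>>>
>>>>>>> import (
>>>>>>>     "encoding/binary"
>>>>>>>     "io"
>>>>>>> )
>>>>>>>
>>>>>>> // writeFrame sends one length-prefixed message over a raw stream.
>>>>>>> func writeFrame(w io.Writer, payload []byte) error {
>>>>>>>     var hdr [4]byte
>>>>>>>     binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
>>>>>>>     if _, err := w.Write(hdr[:]); err != nil {
>>>>>>>         return err
>>>>>>>     }
>>>>>>>     _, err := w.Write(payload)
>>>>>>>     return err
>>>>>>> }
>>>>>>>
>>>>>>> // readFrame reads one length-prefixed message.
>>>>>>> func readFrame(r io.Reader) ([]byte, error) {
>>>>>>>     var hdr [4]byte
>>>>>>>     if _, err := io.ReadFull(r, hdr[:]); err != nil {
>>>>>>>         return nil, err
>>>>>>>     }
>>>>>>>     payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
>>>>>>>     if _, err := io.ReadFull(r, payload); err != nil {
>>>>>>>         return nil, err
>>>>>>>     }
>>>>>>>     return payload, nil
>>>>>>> }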
>>>>>>>
>>>>>>> Below is our benchmark data - please note it's an informal 
>>>>>>> benchmark, so treat it as casual reference only.
>>>>>>>
>>>>>>> *Test method*: use three client machines, each making 10 
>>>>>>> connections to one server and then making a "hello" request in a 
>>>>>>> loop on each connection (no per-request goroutines); both the 
>>>>>>> request and the response contain a 1 KB message.
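>>>>>>>
>>>>>>> Concretely, each client's loop looks roughly like this, using 
>>>>>>> grpc-go's helloworld example (the address and payload construction 
>>>>>>> are assumptions, not the exact harness):
>>>>>>>
>>>>>>> package main
>>>>>>>
>>>>>>> import (
>>>>>>>     "context"
>>>>>>>     "log"
>>>>>>>     "strings"
>>>>>>>     "sync"
>>>>>>>
>>>>>>>     "google.golang.org/grpc"
>>>>>>>     pb "google.golang.org/grpc/examples/helloworld/helloworld"
>>>>>>> )
>>>>>>>
>>>>>>> func main() {
>>>>>>>     payload := strings.Repeat("x", 1024) // ~1 KB request body
>>>>>>>     var wg sync.WaitGroup
>>>>>>>     for i := 0; i < 10; i++ { // 10 connections per client machine
>>>>>>>         conn, err := grpc.Dial("server:50051", grpc.WithInsecure())
>>>>>>>         if err != nil {
>>>>>>>             log.Fatal(err)
>>>>>>>         }
>>>>>>>         client := pb.NewGreeterClient(conn)
>>>>>>>         wg.Add(1)
>>>>>>>         // One long-lived goroutine per connection; none are
>>>>>>>         // created per request.
>>>>>>>         go func() {
>>>>>>>             defer wg.Done()
>>>>>>>             for {
>>>>>>>                 _, err := client.SayHello(context.Background(),
>>>>>>>                     &pb.HelloRequest{Name: payload})
>>>>>>>                 if err != nil {
>>>>>>>                     log.Fatal(err)
>>>>>>>                 }
>>>>>>>             }
>>>>>>>         }()
>>>>>>>     }
>>>>>>>     wg.Wait()
>>>>>>> }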
>>>>>>>
>>>>>>> *gRPC-go*
>>>>>>> Max TPS: 180K, Avg. latency: 0.82ms, server CPU: 67%
>>>>>>>
>>>>>>> *Go native RPC*
>>>>>>> Max TPS: 300K, Avg. latency: 0.26ms, server CPU: 37%
>>>>>>>
>>>>>>> *Apache Thrift with Go*
>>>>>>> Max TPS: 200K, Avg. latency: 0.29ms, server CPU: 21%
>>>>>>>
>>>>>>> Jack
>>>>>>>
>>>
>>
>
>
>
> -- 
> Thanks,
> -Qi
>
