Hi Jack,

I believe a lot of this is not documented yet - I think Mark Roth is 
working on it - so you will be relying on my memory, which I hope you don't 
mind :)  This will be a fun team effort.  You can start a TCP server via 
the grpc_tcp_server_start function, defined here:

https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_server_posix.c#L682-L722
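
For context, here is roughly how that call gets used - this is just a 
sketch from my memory of how the chttp2 server wires it up, so please 
double-check the exact signatures against tcp_server.h at whatever revision 
you are on:

/* Sketch only: signatures are from memory, verify against
   iomgr/tcp_server.h. */
#include "src/core/lib/iomgr/tcp_server.h"

/* Called once per accepted connection with a ready grpc_endpoint; this is
   where a custom transport would take ownership of the connection. */
static void on_accept(grpc_exec_ctx *exec_ctx, void *arg, grpc_endpoint *ep,
                      grpc_pollset *accepting_pollset,
                      grpc_tcp_server_acceptor *acceptor) {
  /* hand `ep` off to your transport here */
}

static void start_my_server(grpc_exec_ctx *exec_ctx, grpc_tcp_server *server,
                            grpc_pollset **pollsets, size_t pollset_count) {
  /* `server` was created with grpc_tcp_server_create and had ports added
     with grpc_tcp_server_add_port beforehand. */
  grpc_tcp_server_start(exec_ctx, server, pollsets, pollset_count, on_accept,
                        NULL /* on_accept_cb_arg */);
}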

Keep in mind that you can create a TCP endpoint via the grpc_tcp_create 
function, defined here:

https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_posix.c#L467-L493
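
Again from memory, grpc_tcp_create wraps an already-connected socket 
(registered as a grpc_fd) into an endpoint, roughly like the sketch below - 
the slice size and peer string are just illustrative values:

/* Sketch only: parameter list is from memory, verify against
   iomgr/tcp_posix.h. */
#include "src/core/lib/iomgr/tcp_posix.h"

/* `fd` is a connected socket already wrapped via grpc_fd_create.  The
   returned grpc_endpoint is then driven through the generic endpoint API. */
static grpc_endpoint *wrap_connected_fd(grpc_fd *fd) {
  /* 8192 is just an illustrative read-slice size; the peer string is only
     used for identification/debugging. */
  return grpc_tcp_create(fd, 8192, "ipv4:127.0.0.1:50051");
}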

One thing to note is that the endpoint's behavior is defined via its 
vtable, a grpc_endpoint_vtable struct, in which you put references to your 
transport functions for reading, writing, etc., like so:

static const grpc_endpoint_vtable vtable = {tcp_read,
                                            tcp_write,
                                            tcp_get_workqueue,
                                            tcp_add_to_pollset,
                                            tcp_add_to_pollset_set,
                                            tcp_shutdown,
                                            tcp_destroy,
                                            tcp_get_peer};

The above is defined on the following lines:

https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_posix.c#L458-L465

If you are unfamiliar with the endpoint definition, it is in the following 
file:

https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/endpoint.h#L49-L62

And looks like this:

/* An endpoint caps a streaming channel between two communicating processes.
   Examples may be: a tcp socket, <stdin+stdout>, or some shared memory. */

typedef struct grpc_endpoint grpc_endpoint;
typedef struct grpc_endpoint_vtable grpc_endpoint_vtable;

struct grpc_endpoint_vtable {
  void (*read)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
               gpr_slice_buffer *slices, grpc_closure *cb);
  void (*write)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                gpr_slice_buffer *slices, grpc_closure *cb);
  grpc_workqueue *(*get_workqueue)(grpc_endpoint *ep);
  void (*add_to_pollset)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                         grpc_pollset *pollset);
  void (*add_to_pollset_set)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                             grpc_pollset_set *pollset);
  void (*shutdown)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep);
  void (*destroy)(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep);
  char *(*get_peer)(grpc_endpoint *ep);
};
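
The generic grpc_endpoint_* entry points in endpoint.c then just forward to 
whatever vtable the endpoint was created with, along these lines 
(paraphrasing from memory):

/* Paraphrased from endpoint.c: each generic call dispatches through the
   endpoint's vtable. */
void grpc_endpoint_read(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                        gpr_slice_buffer *slices, grpc_closure *cb) {
  ep->vtable->read(exec_ctx, ep, slices, cb);
}

void grpc_endpoint_write(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                         gpr_slice_buffer *slices, grpc_closure *cb) {
  ep->vtable->write(exec_ctx, ep, slices, cb);
}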

So basically you can write your own transport functions if you prefer - a 
bare-bones skeleton of what that might look like is sketched below.  Again, 
all of this is based on my reading of the code over time, so if anyone 
thinks I misinterpreted something, please correct me.
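
Just to illustrate the shape of it, something like this should be enough to 
plug in - every my_* name here is made up for illustration; only the vtable 
layout mirrors grpc_endpoint_vtable from endpoint.h:

/* Skeleton of a custom endpoint: all my_* names are invented; only the
   vtable field order follows grpc_endpoint_vtable. */
#include <grpc/support/alloc.h>
#include <grpc/support/string_util.h>
#include "src/core/lib/iomgr/endpoint.h"

typedef struct {
  grpc_endpoint base; /* must be first so vtable dispatch works */
  /* your own state: fd, buffers, ... */
} my_endpoint;

static void my_read(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                    gpr_slice_buffer *slices, grpc_closure *cb) {
  /* fill `slices` and schedule `cb` once data is available */
}

static void my_write(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                     gpr_slice_buffer *slices, grpc_closure *cb) {
  /* flush `slices` and schedule `cb` once the write completes */
}

static grpc_workqueue *my_get_workqueue(grpc_endpoint *ep) { return NULL; }

static void my_add_to_pollset(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                              grpc_pollset *pollset) {}

static void my_add_to_pollset_set(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep,
                                  grpc_pollset_set *pollset_set) {}

static void my_shutdown(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep) {}

static void my_destroy(grpc_exec_ctx *exec_ctx, grpc_endpoint *ep) {
  gpr_free(ep);
}

static char *my_get_peer(grpc_endpoint *ep) { return gpr_strdup("my-peer"); }

static const grpc_endpoint_vtable my_vtable = {my_read,
                                               my_write,
                                               my_get_workqueue,
                                               my_add_to_pollset,
                                               my_add_to_pollset_set,
                                               my_shutdown,
                                               my_destroy,
                                               my_get_peer};

grpc_endpoint *my_endpoint_create(void) {
  my_endpoint *m = gpr_malloc(sizeof(*m));
  m->base.vtable = &my_vtable;
  return &m->base;
}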

Hope it helps,
Paul


On Friday, August 26, 2016 at 12:36:39 PM UTC-4, [email protected] wrote:
>
> Hi community,
>
> We've run some informal benchmarks to compare gRPC-go's performance with 
> other alternatives, and it's apparent that gRPC could improve in this 
> area. One option we'd like to experiment with is to see how much 
> performance gain we would get by using TCP instead of HTTP 2.0. I 
> understand that HTTP 2.0 has been one of the core values of the gRPC 
> project, but it would be interesting to understand its performance 
> implications and explore the TCP option, which might fit well in many 
> scenarios. In order to do so, I'd like to get some advice on how to 
> replace the transport layer. Is it sufficient to replace the 
> implementation in the google.golang.org/grpc/transport/ package? Our 
> initial experiment indicates that some code in the call/invocation layer 
> is coupled with the HTTP 2.0 transport, e.g., the context remembers some 
> HTTP 2.0-related status. Your advice on how to do a clean switch to TCP 
> would be appreciated.
>
> Below is our benchmark data - please note this is an informal benchmark, 
> so it's just for casual reference.
>
> *Test method*: use three client machines, each making 10 connections to 
> one server and then issuing "hello" requests in a loop (no new goroutines 
> created); both the request and the response contain a 1 KB message.
>
> *gRPC-go*
> Max TPS: 180K, Ave. latency: 0.82ms, server CPU: 67%
>
> *Go native RPC*
> Max TPS: 300K, Ave. latency: 0.26ms, server CPU: 37%
>
> *Apache Thrift with Go*
> Max TPS: 200K, Ave. latency: 0.29ms, server CPU: 21%
>
> Jack
>
>
