[grpc-io] [grpc-go] How can I measure server RT exactly in the biz goroutine after the dedicated writing goroutine (#1498)

2017-09-17 Thread Zeymo Wang
RT (as per the subject line).
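For context: grpc-go #1498 moved status/trailer writes onto a dedicated writer goroutine, so timing taken only inside the handler (biz) goroutine no longer covers the final flush. One hedged option, not taken from this thread, is a stats.Handler, which grpc-go calls with a *stats.End event when the RPC finishes on the server side; the names rtHandler and rtKey below are made up for illustration, and whether the End event lands exactly after the frame leaves the write goroutine depends on the grpc-go version.

```go
// Minimal sketch: per-RPC server timing via a stats.Handler.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

type rtKey struct{}

type rtHandler struct{}

func (rtHandler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
	// Stamp the RPC start time into the context for this RPC.
	return context.WithValue(ctx, rtKey{}, time.Now())
}

func (rtHandler) HandleRPC(ctx context.Context, s stats.RPCStats) {
	// stats.End is delivered when the RPC completes, after the status has
	// been handed to the transport (i.e. past the biz goroutine).
	if _, ok := s.(*stats.End); ok {
		if begin, ok := ctx.Value(rtKey{}).(time.Time); ok {
			log.Printf("server RT: %v", time.Since(begin))
		}
	}
}

func (rtHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
func (rtHandler) HandleConn(context.Context, stats.ConnStats)                       {}

func main() {
	// Register the handler on the server; service registration omitted.
	_ = grpc.NewServer(grpc.StatsHandler(rtHandler{}))
}
```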

[grpc-io] [gRPC-go] no trailer returned when the server times out

2017-08-10 Thread Zeymo Wang
When the server's RecvMsg times out (perhaps due to a bad network condition), it raises context DeadlineExceeded and invokes t.WriteStatus, but the wait selects on s.ctx and returns an error without flushing the trailer to the client. The only way the stream gets removed from the map is when the client's RST_STREAM is received. If I'm correct, then grpc-go…
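As a hedged illustration of the client side of this scenario (assumed setup, not code from the thread): if the client sets its own deadline, it cancels the RPC locally and sends RST_STREAM itself, so it does not depend on the server flushing a trailer, and the failure surfaces as codes.DeadlineExceeded. The health service is used here only to have a real generated stub.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
	"google.golang.org/grpc/status"
)

func main() {
	// Assumed endpoint; plaintext dial for brevity.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Client-side deadline: if the server stalls, the client cancels the RPC
	// itself (sending RST_STREAM) instead of waiting for a server trailer.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	_, err = healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if status.Code(err) == codes.DeadlineExceeded {
		log.Printf("timed out on the client side: %v", err)
	}
}
```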

[grpc-io] [gRPC-go] use multiple addrConns to the same endpoint

2017-06-16 Thread Zeymo Wang
We have a gateway acting as a gRPC-go client that proxies all requests to a gRPC server. Under high-concurrency / high-QPS benchmarks, an addrConn quickly reaches the limit on the number of concurrent HTTP/2 streams (e.g. 1000), even though the clientConn is multiplexed. Is there any best practice for handling multiple addrConns to…
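A commonly suggested workaround, sketched here under assumptions rather than quoted from the thread: open several independent ClientConns to the same endpoint and pick one round-robin per call, so no single HTTP/2 connection hits the server's MAX_CONCURRENT_STREAMS limit. connPool and newConnPool are hypothetical helpers.

```go
package main

import (
	"log"
	"sync/atomic"

	"google.golang.org/grpc"
)

// connPool holds N independent ClientConns (each its own HTTP/2 connection)
// to the same endpoint, picked round-robin.
type connPool struct {
	conns []*grpc.ClientConn
	next  uint64
}

func newConnPool(target string, size int, opts ...grpc.DialOption) (*connPool, error) {
	p := &connPool{}
	for i := 0; i < size; i++ {
		cc, err := grpc.Dial(target, opts...)
		if err != nil {
			return nil, err
		}
		p.conns = append(p.conns, cc)
	}
	return p, nil
}

func (p *connPool) pick() *grpc.ClientConn {
	n := atomic.AddUint64(&p.next, 1)
	return p.conns[n%uint64(len(p.conns))]
}

func main() {
	pool, err := newConnPool("localhost:50051", 4, grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	cc := pool.pick() // create stubs from cc as usual
	_ = cc
}
```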

[grpc-io] [grpc-go] why use an independent keepalive goroutine rather than conn.SetReadDeadline?

2017-06-08 Thread Zeymo Wang
Is there any particular consideration behind this?
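For context on the design being asked about, here is a hedged sketch of how the client-side keepalive is configured in grpc-go: the dedicated goroutine sends HTTP/2 PINGs over the connection and so can detect a dead peer independently of any read deadline on the underlying net.Conn. The parameter values below are illustrative, not recommendations.

```go
package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	kp := keepalive.ClientParameters{
		Time:                30 * time.Second, // ping if the connection is idle for 30s
		Timeout:             10 * time.Second, // wait 10s for the ping ack before closing
		PermitWithoutStream: true,             // ping even when there are no active RPCs
	}
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithInsecure(),
		grpc.WithKeepaliveParams(kp),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```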

[grpc-io] Re: Performance of customized grpc-go as a gateway

2017-05-09 Thread Zeymo Wang
Maybe the periodically high CPU usage is related to sync.Pool's runtime.futex calls when acquiring?
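For readers unfamiliar with the suspicion here: in older Go runtimes a heavily contended sync.Pool (Get falling through to the mutex-guarded shared list) can show up as runtime.futex time in a CPU profile. A generic buffer-pool sketch, purely illustrative and not the poster's code:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffers between requests to cut allocations; under
// contention the pool's internal locking is what may surface as runtime.futex.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func handle(payload []byte) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	buf.Write(payload)
	return buf.String()
}

func main() {
	fmt.Println(handle([]byte("hello")))
}
```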

[grpc-io] Performance of customized grpc-go as a gateway

2017-05-09 Thread Zeymo Wang
I customized grpc-go as a gateway proxy that removes the IDL layer and implements its own TLS 1.3 with `cgo` for the authenticator, proxying binary gRPC calls forward to an inner binary RPC. In production I see some unusual phenomena when profiling. KVM, 4 cores + 8 GB, Intel Xeon E3-12xx v2 (Ivy Bridge) 2099 MHz, connections…
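A hedged sketch of how such a gateway process could expose profiling data so CPU hotspots (syscalls, futex waits) can be inspected; the listen address is just an example.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers
)

func main() {
	// Expose pprof on a side port; the gateway itself would serve elsewhere.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the real gateway serving loop
}
```

With a recent Go toolchain, a 30-second CPU profile can then be pulled with `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` and inspected with `top` to see whether runtime.futex or syscall.write dominates.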

[grpc-io] Re: Go gRPC syscall.write causes high CPU usage under not especially heavy load

2017-01-19 Thread Zeymo Wang
Go version 1.7. On Thursday, January 19, 2017 at 5:25:36 PM UTC+8, Zeymo Wang wrote: > I forked grpc-go to use as a gateway just to enjoy the h2c benefit (and also removed the pb IDL feature); in it I implemented 0-RTT TLS (cgo invoking libsodium) to replace the standard TLS, and handle each request by just…
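One lever for write-syscall overhead, sketched here under assumptions and available only in grpc-go versions newer than the 2017 fork discussed in this thread: larger transport buffers let the server batch frames into fewer write(2) calls. The buffer size below is an arbitrary illustrative value, and running without TransportCredentials gives plaintext HTTP/2 (h2c) as in the poster's setup.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	// Bigger write/read buffers batch frames into fewer syscalls, which is
	// one thing to try when syscall.write dominates a CPU profile.
	srv := grpc.NewServer(
		grpc.WriteBufferSize(128*1024),
		grpc.ReadBufferSize(128*1024),
	)
	// Service registration omitted; no credentials option means h2c.
	log.Fatal(srv.Serve(lis))
}
```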