RT (see subject).
When the server's RecvMsg times out (maybe under a bad network condition), it
raises context.DeadlineExceeded and invokes t.WriteStatus, but the wait there
selects on s.ctx and returns an error without flushing the trailer to the
client. After that, the only way the stream gets removed from the map is for
the server to receive the client's RST_STREAM. Correct me if I'm wrong about
grpc-go's behavior here.
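A minimal sketch of the race I mean (this is not grpc-go's actual code;
writeStatus, flushed, and errStreamDone are illustrative names):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errStreamDone = errors.New("stream context done before trailer flush")

// writeStatus waits for either the trailer flush to complete or the stream
// context to expire. If the context deadline has already passed (e.g.
// RecvMsg timed out), the select takes the ctx branch and the trailer is
// never flushed to the client.
func writeStatus(ctx context.Context, flushed <-chan struct{}) error {
	select {
	case <-ctx.Done():
		return errStreamDone // trailer not sent; stream lingers in the map
	case <-flushed:
		return nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	time.Sleep(2 * time.Millisecond) // simulate RecvMsg hitting its deadline

	flushed := make(chan struct{}) // the flush never completes in this sketch
	fmt.Println(writeStatus(ctx, flushed))
}
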
Here the gateway runs as a gRPC-go client that proxies all requests to the
gRPC server. In a high-concurrency / high-QPS benchmark, the addrConn soon
reaches the limit on the number of concurrent HTTP/2 streams (e.g. 1000),
even though the clientConn is multiplexed. Is there any best practice for
handling multiple addrConns to get past this limit? Has this been considered?
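For example, a rough client-side sketch of what I mean (connPool,
newConnPool, and the target address are made-up names, not a grpc-go API):

package main

import (
	"fmt"
	"sync/atomic"

	"google.golang.org/grpc"
)

// connPool spreads streams over several ClientConns so that no single
// HTTP/2 connection pins against the server's MAX_CONCURRENT_STREAMS.
type connPool struct {
	conns []*grpc.ClientConn
	next  uint64
}

func newConnPool(target string, size int, opts ...grpc.DialOption) (*connPool, error) {
	p := &connPool{conns: make([]*grpc.ClientConn, 0, size)}
	for i := 0; i < size; i++ {
		cc, err := grpc.Dial(target, opts...) // non-blocking dial
		if err != nil {
			return nil, err
		}
		p.conns = append(p.conns, cc)
	}
	return p, nil
}

// pick returns a connection round-robin; each extra conn adds another
// MAX_CONCURRENT_STREAMS worth of in-flight stream capacity.
func (p *connPool) pick() *grpc.ClientConn {
	n := atomic.AddUint64(&p.next, 1)
	return p.conns[n%uint64(len(p.conns))]
}

func main() {
	pool, err := newConnPool("backend:50051", 4, grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pool of %d conns ready\n", len(pool.conns))
	_ = pool.pick()
}
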
Maybe the periodic high CPU is related to sync.Pool's runtime.futex calls
when acquiring (Get)?
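For reference, the sync.Pool usage pattern in question (a minimal sketch;
handle and the payload are illustrative). Under heavy contention, pool
acquisition can show up as runtime.futex time in a CPU profile:

package main

import (
	"bytes"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func handle(payload []byte) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	buf.Write(payload)
	// ... use buf ...
	bufPool.Put(buf) // return it so the next Get can reuse the buffer
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 64; i++ { // many goroutines to exercise pool contention
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				handle([]byte("hello"))
			}
		}()
	}
	wg.Wait()
}
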
I customized grpc-go as a gateway proxy: it removes the IDL layer and
implements its own TLS 1.3 (via cgo) for the authenticator, proxying binary
gRPC calls forward to an inner binary RPC service. In production I see some
unusual phenomena when profiling.

Environment:
KVM, 4 cores + 8 GB RAM, Intel Xeon E3-12xx v2 (Ivy Bridge) @ 2099 MHz
connections
Go version: 1.7
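A rough sketch of the IDL-less forwarding pattern (not the actual production
code; rawCodec and the echo body are illustrative, while CustomCodec and
UnknownServiceHandler are existing grpc-go server options):

package main

import (
	"fmt"

	"google.golang.org/grpc"
)

// rawCodec leaves message bytes untouched, so the proxy never needs the
// generated pb types.
type rawCodec struct{}

func (rawCodec) Marshal(v interface{}) ([]byte, error) {
	b, ok := v.(*[]byte)
	if !ok {
		return nil, fmt.Errorf("rawCodec: expected *[]byte, got %T", v)
	}
	return *b, nil
}

func (rawCodec) Unmarshal(data []byte, v interface{}) error {
	b, ok := v.(*[]byte)
	if !ok {
		return fmt.Errorf("rawCodec: expected *[]byte, got %T", v)
	}
	*b = data
	return nil
}

func (rawCodec) String() string { return "raw" }

func main() {
	// Any method name falls through to this handler; a real proxy would dial
	// the inner RPC service and copy frames both ways (copy loop omitted).
	srv := grpc.NewServer(
		grpc.CustomCodec(rawCodec{}),
		grpc.UnknownServiceHandler(func(srv interface{}, stream grpc.ServerStream) error {
			var frame []byte
			if err := stream.RecvMsg(&frame); err != nil {
				return err
			}
			return stream.SendMsg(&frame) // echo placeholder, not a real forward
		}),
	)
	_ = srv // srv.Serve(lis) would start the gateway
}
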
On Thursday, January 19, 2017 at 5:25:36 PM UTC+8, Zeymo Wang wrote:
>
> I forked grpc-go to use as a gateway, just to enjoy the h2c benefit (I
> also removed the pb IDL feature). I implemented 0-RTT TLS (cgo invoking
> libsodium) to replace the standard TLS, and the request handling just does