This looks like an issue that has been seen in grpc-go before. What type 
of calls are these: unary or streaming?

Indeed, for unary calls, grpc-go currently flushes its writes after 
sending the headers and again after sending the status of each call. The 
message portions of both unary and streaming calls (split into separate 
HTTP/2 DATA frames when a message is large) get some batching of 
syscall.Write calls, but only a small amount.
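
To make the cost concrete, here is a minimal sketch (not grpc-go's actual 
framer) of why flushing after each frame matters: the headers, data, and 
status frames of a unary call each become their own write to the socket, 
whereas buffering them and flushing once collapses them into a single 
write. The countingWriter type below is a hypothetical stand-in for a TCP 
connection, where each Write would be one syscall.Write:

    package main

    import (
        "bufio"
        "fmt"
    )

    // countingWriter stands in for a TCP connection; every Write here
    // would be one syscall.Write on a real *net.TCPConn.
    type countingWriter struct{ writes int }

    func (w *countingWriter) Write(p []byte) (int, error) {
        w.writes++
        return len(p), nil
    }

    func main() {
        frames := [][]byte{
            []byte("HEADERS frame"),
            []byte("DATA frame"),
            []byte("HEADERS frame (trailers/status)"),
        }

        // Flush per frame: three Writes, i.e. three syscalls per call.
        perFrame := &countingWriter{}
        bw := bufio.NewWriter(perFrame)
        for _, f := range frames {
            bw.Write(f)
            bw.Flush()
        }
        fmt.Println("flush per frame:", perFrame.writes, "writes")

        // Batched: buffer all three frames, flush once -> one write.
        batched := &countingWriter{}
        bw = bufio.NewWriter(batched)
        for _, f := range frames {
            bw.Write(f)
        }
        bw.Flush()
        fmt.Println("single flush:   ", batched.writes, "writes")
    }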

There has been some work toward reducing this, but I think what you're 
seeing is expected with the latest release. (There's one experimental 
approach to reducing unary-call flushes 
in https://github.com/grpc/grpc-go/pull/973.)
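
On the syscall.writev question specifically: Go 1.8 adds net.Buffers, 
which writes a [][]byte to a TCP connection using a single writev(2) on 
platforms that support it, so separate frames can go out in one syscall 
without first copying them into a contiguous buffer. A minimal sketch of 
that approach, assuming a plain TCP connection (the address is a 
placeholder):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        conn, err := net.Dial("tcp", "127.0.0.1:8080") // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // The three frames of a unary response, kept as separate slices.
        frames := net.Buffers{
            []byte("HEADERS frame"),
            []byte("DATA frame"),
            []byte("HEADERS frame (trailers/status)"),
        }

        // One WriteTo call; on Linux this becomes a single writev
        // syscall instead of three write syscalls.
        if _, err := frames.WriteTo(conn); err != nil {
            log.Fatal(err)
        }
    }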

On Thursday, January 19, 2017 at 1:25:36 AM UTC-8, Zeymo Wang wrote:
>
> I forked grpc-go to use as a gateway, mainly to get the h2c benefit (I 
> also removed the pb IDL feature). I implemented 0-RTT TLS (cgo calling 
> libsodium) to replace the standard TLS, and the handler simply forwards 
> each request as an HTTP request to the upstream. In a benchmark of 
> bidirectional streaming RPCs, I see high CPU usage under a load that is 
> not very heavy (maxConcurrentStreams = 100 or 1000, same result). 
> According to "go tool pprof", syscall.Write consumes a lot of CPU and 
> response time (maybe cgo performance?). Is the problem that each call 
> makes at least three syscall.Write (flush) calls: header + data + 
> status? Does the original grpc-go have this issue? How can I resolve or 
> reduce the syscall.Write invocations, or should I wait for Go to add 
> syscall.writev?
>
> <https://lh3.googleusercontent.com/-0TCAcilsguw/WICDnBBHKBI/AAAAAAAAB_k/2OtJVaBq9ykgXPKboM43S8PWR1OXT59oQCEw/s1600/perf.jpg>
>
> <https://lh3.googleusercontent.com/-2HXrQl6GgH0/WICENZODGQI/AAAAAAAAB_o/VUTPcgod4wQsI8Csoh7rVSBwcEe-n3yqQCLcB/s1600/strace.jpg>
>
> <https://lh3.googleusercontent.com/-IV3cZmIFYso/WICD8niuwnI/AAAAAAAAB_g/liXcah0inB4RQJxujk57SYxfzmjaVCvgQCEw/s1600/pprof.png>
