Hi Srini, I was wondering if I could ask you a few questions regarding the gRPC proxyless service mesh?
Thanks,
Nemanja

On Tuesday, August 17, 2021 at 11:34:28 PM UTC+1 Srini Polavarapu wrote:

> Hi,
>
> The gRPC team ran a one-time perf benchmark to get a general idea; a
> comprehensive and continuous benchmarking plan is on the roadmap. In the
> ad hoc test, we tested the gRPC 1.30 C++ xDS stack against Envoy compiled
> with -c opt and -fno-omit-frame-pointer from the 1.14.1 tag. Envoy was
> run with logging turned off entirely and with the default concurrency
> setting, which creates one thread per CPU; this resulted in messages
> being balanced across 8 threads in our setup. We were interested in the
> cost of a query in terms of CPU-seconds, i.e., how much CPU time is
> required on the client side (client process + sidecar) to transmit a
> single request. Load was varied from 1K to 22K QPS with a 1K-byte
> payload.
>
> Since this was not a comprehensive test and real-world mileage depends
> on many things, we don't want to publish data from this test, but in
> general you can expect to see 1.5-3x CPU savings in networking cost,
> i.e., the more network-intensive your application is, the higher the
> benefit. We didn't test latency or memory utilization, but you can find
> latency data in the Istio benchmarking docs
> <https://istio.io/latest/docs/ops/deployment/performance-and-scalability/#latency>.
>
> On Monday, August 16, 2021 at 9:50:14 AM UTC-7 Gaurav Poothia wrote:
>
>> Hello,
>> I saw a talk by Mark Roth from EnvoyCon about gRPC proxyless mesh
>> having superior QPS per CPU-second and latency compared to Envoy, all
>> of which is of course expected.
>>
>> Can anyone please share results/setup from benchmarks around these two
>> metrics? It would be great to understand the perf benefits more deeply.
>>
>> Thanks!
>> Gaurav
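For anyone wanting to reproduce the CPU-seconds-per-query metric Srini describes, here is a minimal sketch of how the client-process side of that number could be computed on a POSIX system using getrusage. The request count and the RPC driver are hypothetical placeholders, and in the sidecar configuration the proxy's CPU time would have to be sampled separately (e.g., from its cgroup stats) and added in:

```cpp
#include <sys/resource.h>  // getrusage
#include <cstdio>

// Total CPU time (user + system) consumed by this process so far, in seconds.
static double ProcessCpuSeconds() {
  struct rusage ru;
  getrusage(RUSAGE_SELF, &ru);
  double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
  double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
  return user + sys;
}

int main() {
  const long kNumRequests = 100000;  // hypothetical request count
  double cpu_before = ProcessCpuSeconds();

  // ... issue kNumRequests gRPC calls here (driver omitted) ...

  double cpu_after = ProcessCpuSeconds();
  // The metric from the thread: CPU time per request on the client side.
  std::printf("CPU-seconds per request: %.9f\n",
              (cpu_after - cpu_before) / kNumRequests);
  return 0;
}
```

For context on what "proxyless" means at the code level, below is a sketch of creating an xDS channel in C++. It assumes a reasonably recent grpc++ build with xDS support and a bootstrap file supplied via the GRPC_XDS_BOOTSTRAP environment variable; "my-service" is a hypothetical listener name that would have to be served by your control plane (e.g., Traffic Director or Istiod):

```cpp
#include <iostream>
#include <memory>
#include <grpcpp/grpcpp.h>

int main() {
  // The xds: scheme activates the proxyless path: the client fetches
  // routing and load-balancing config from the xDS control plane named
  // in the bootstrap file ($GRPC_XDS_BOOTSTRAP) instead of sending
  // traffic through a sidecar. "my-service" is a hypothetical name.
  std::shared_ptr<grpc::Channel> channel = grpc::CreateChannel(
      "xds:///my-service", grpc::InsecureChannelCredentials());

  // Any generated stub works on this channel; RPCs flow directly from
  // client to backend, which is where the CPU savings discussed above
  // come from (no extra hop, no proxy process burning CPU).
  std::cout << "channel state: "
            << channel->GetState(/*try_to_connect=*/true) << std::endl;
  return 0;
}
```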
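Note that the RPCs themselves are unchanged between the proxyless and sidecar setups; only the channel target and the presence of the bootstrap file differ, which is what makes an apples-to-apples CPU comparison like the one described above feasible.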
