s/first request/first requests/

On Tuesday, January 3, 2023 at 2:57:21 PM UTC-8 Sergii Tkachenko wrote:

> Just an idea - did you try running `ghz` with the `--async` flag? It might 
> also make sense to play around with the `--skipFirst` flag, so that the 
> first request to a not-yet-warmed JVM does not bias the result.
> Also - it looks like the REST wrk benchmark uses 400 connections, while the 
> gRPC ghz run uses just one. Consider trying the `--connections=400` ghz 
> argument.
> Docs: https://ghz.sh/docs/usage 
>
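> For example, something along these lines - the concrete numbers are only 
> placeholders chosen to roughly match the wrk run, and I believe ghz needs 
> `--concurrency` to be at least as large as `--connections`, so bump that 
> one too:
>
> ghz --insecure --proto=plankton/src/main/proto/payload.proto \
>   --call=muddywaters.plankton.EatService/EatStream \
>   --duration=10s --duration-stop=wait --data-file=payload-10.json \
>   --async --concurrency=400 --connections=400 --skipFirst=1000 \
>   localhost:17001
>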
> On Friday, December 30, 2022 at 8:35:44 AM UTC-8 gordan...@steatoda.com 
> wrote:
>
>> Out of curiosity, I decided to compare the performance of making a gRPC 
>> call vs. making a REST call. To my surprise, gRPC turned out to be several 
>> times slower. I'm hoping that I'm just missing something obvious.
>>
>> Repo with tests:
>>
>> https://github.com/gkresic/muddy-waters
>>
>> Build it with (you'll need Java 17 somewhere on the path):
>>
>> ./gradlew build
>>
>> In that repo there are various REST services implemented using different 
>> Java REST libs and frameworks; the one that uses gRPC is named 
>> 'plankton'.
>>
>> The general benchmark across all subprojects is to send multiple objects 
>> (called 'Payload' in the sources), each with one integer and one textual 
>> field, and to receive only one such object as the response (the method for 
>> calculating that response is deliberately trivial and not important here). 
>> REST services are tested with wrk (https://github.com/wg/wrk) and gRPC 
>> with ghz (https://ghz.sh/). Just to rule out ghz as the cause of the low 
>> performance, I've also implemented my own simple Java gRPC client benchmark.
>>
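>> For reference, the Payload message and the two calls used below boil down 
>> to something like the following (field names here are only illustrative - 
>> payload.proto in the repo is the authoritative definition):
>>
>> syntax = "proto3";
>>
>> package muddywaters.plankton;
>>
>> // One integer and one textual field, as described above.
>> message Payload {
>>   int32 number = 1;
>>   string text = 2;
>> }
>>
>> service EatService {
>>   // One Payload in, one Payload out.
>>   rpc EatOne (Payload) returns (Payload);
>>   // The client streams multiple Payloads, the server answers with one.
>>   rpc EatStream (stream Payload) returns (Payload);
>> }
>>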
>> To run plankton:
>>
>> cd plankton/build/install/plankton/
>> bin/plankton
>>
>> Benchmark using ghz (from repo root):
>>
>> ghz --insecure --proto=plankton/src/main/proto/payload.proto \
>>   --call=muddywaters.plankton.EatService/EatStream --duration=10s \
>>   --duration-stop=wait --data-file=payload-10.json localhost:17001
>>
>> On my machine it gives me ~9k requests/sec.
>>
>> Now compare this to the 'dolphin' subproject, which implements a REST 
>> endpoint using Vert.x, built on the same Netty that gRPC uses:
>>
>> cd dolphin/build/install/dolphin/
>> bin/dolphin
>>
>> Benchmark using wrk (from repo root):
>>
>> wrk -t4 -c400 -d10s -s payload-10.lua http://localhost:16006/eat
>>
>> It easily goes above 100k requests/sec.
>>
>> To explore further, I wrote the simplest possible gRPC service: it accepts 
>> an empty message ('Void') and returns that same message as the response, 
>> just to minimize the effect of message encoding/decoding/processing. You 
>> can test it with:
>>
>> ghz --insecure --proto=plankton/src/main/proto/ping.proto \
>>   --call=muddywaters.plankton.PingService/Ping --duration=10s \
>>   --duration-stop=wait localhost:17001
>>
>> However, even that simplest of services maxes out at ~14k 
>> requests/sec.
>>
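>> (For completeness, that handler is about the smallest unary method one can 
>> write - something along these lines, assuming the generated classes land 
>> in package muddywaters.plankton; the actual names in the repo may differ:)
>>
>> import io.grpc.stub.StreamObserver;
>>
>> import muddywaters.plankton.PingServiceGrpc;
>> import muddywaters.plankton.Void;
>>
>> // Echo the empty Void message straight back to the caller.
>> public class PingServiceImpl extends PingServiceGrpc.PingServiceImplBase {
>>     @Override
>>     public void ping(Void request, StreamObserver<Void> responseObserver) {
>>         responseObserver.onNext(request);
>>         responseObserver.onCompleted();
>>     }
>> }
>>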
>> Like I said, I wrote my own benchmark client that runs against three 
>> services:
>>
>> * Ping: receives an empty message and returns it in the response
>> * EatOne: receives one Payload and returns one Payload
>> * EatStream: receives a stream of Payloads and returns one Payload - the 
>> gRPC implementation of my "standardized" test
>>
>> Run it with (from repo root):
>>
>> ./gradlew :plankton:benchmark
>>
>> It will run all three tests *three* times, to rule out JVM JIT warm-up 
>> from the calculations. However, even that benchmark is not much faster:
>>
>> Ping: 31k requests/sec
>> EatOne: 30k requests/sec
>> EatStream: 14k requests/sec (reminder: the REST implementation from the 
>> 'dolphin' subproject gives over 100k requests/sec for the same 
>> functionality)
>>
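>> (In case it matters, the client is conceptually nothing more than a loop 
>> like the one below - a simplified, single-threaded sketch with assumed 
>> generated class names, not the exact code from the repo:)
>>
>> import io.grpc.ManagedChannel;
>> import io.grpc.ManagedChannelBuilder;
>>
>> import muddywaters.plankton.PingServiceGrpc;
>> import muddywaters.plankton.Void;
>>
>> public class BenchmarkSketch {
>>     public static void main(String[] args) throws InterruptedException {
>>         // Plaintext channel to the locally running plankton server.
>>         ManagedChannel channel = ManagedChannelBuilder
>>                 .forAddress("localhost", 17001)
>>                 .usePlaintext()
>>                 .build();
>>         PingServiceGrpc.PingServiceBlockingStub stub =
>>                 PingServiceGrpc.newBlockingStub(channel);
>>
>>         // Hammer the Ping method with blocking calls for 10 seconds
>>         // and report the achieved throughput.
>>         Void request = Void.newBuilder().build();
>>         long durationNanos = 10_000_000_000L;
>>         long start = System.nanoTime();
>>         long count = 0;
>>         while (System.nanoTime() - start < durationNanos) {
>>             stub.ping(request);
>>             count++;
>>         }
>>         double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
>>         System.out.printf("Ping: %.0f requests/sec%n", count / seconds);
>>
>>         channel.shutdownNow();
>>         channel.awaitTermination(5, java.util.concurrent.TimeUnit.SECONDS);
>>     }
>> }
>>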
>> What am I missing?
>>
>> -gkresic.
>>
>>
