To be fair, we haven't spent much time optimizing and benchmarking the Windows codepath. Our first priority is Linux for GA, with other platforms to follow after GA. We run continuous benchmarks for Linux, for instance, but not yet for Windows. We'll get to it. Furthermore, the Windows platform and API have a few flaws that won't allow optimizations as extensive as on Linux.
That being said, such a gap isn't healthy, and we should expect call time to be at least a roughly constant function of message size. The fact that the step starts around 8 KB, however, gives me a few ideas of where the problem might be. We'll investigate. Thanks for the reproduction steps and your report.

On Thu, Jul 28, 2016, 08:07 <[email protected]> wrote:

> Sure,
>
> You can use the gRPC helloworld example for reproduction.
> In greeter_client.cc replace the following lines:
>
>     ClientContext context;
>
>     // The actual RPC.
>     Status status = stub_->SayHello(&context, request, &reply);
>
> With:
>
>     for (int i = 1; i < 100; i++)
>     {
>       ClientContext context;
>
>       request.Clear();
>
>       // Create a string with growing length for the request name
>       int stringLen = i * 500;
>       std::string str;
>       str.resize(stringLen);
>       request.set_name(str);
>
>       // SayHello RPC with time measurement (I use <ctime>; you can
>       // use a different time measurement method)
>       std::clock_t begin = std::clock();
>       Status status = stub_->SayHello(&context, request, &reply);
>       std::clock_t end = std::clock();
>
>       // Print time result
>       std::cout << "Byte size " << request.ByteSize() << ", Call duration "
>                 << double(end - begin) / CLOCKS_PER_SEC << std::endl;
>     }
>
> In greeter_server.cc you basically don't need to change anything, but I
> noticed that if the server replies with the same message length as the
> request, this issue doesn't occur. Maybe that's another clue for this
> issue. So, replace
>
>     reply->set_message(prefix + request->name());
>
> with a simple
>
>     reply->set_message("hello");
>
> Run the client and server on different computers and check the time
> measurement prints. It would be great to see if someone else sees the
> same phenomenon.
> Thanks
> Meir
>
> From: Christian Svensson [mailto:[email protected]]
> Sent: Thursday, July 28, 2016 4:20 PM
> To: Meir Vengrover
> Cc: [email protected]; grpc.io; Assaf Waizman; Vadim Grenader; Lena Solodkin
> Subject: Re: [grpc-io] gRPC call time overhead
>
> Hi,
>
> Would you mind sharing the full code for the server / client you're using
> as well? That way people can easily try to reproduce the behavior you are
> seeing.
>
> On Thu, Jul 28, 2016 at 3:06 PM, <[email protected]> wrote:
>
> Hi Louis,
>
> In order to simplify the case I tested a very simple message. The proto
> file looks like:
>
>     rpc SimpleMessage(SimpleByteArray) returns (Empty) {}
>
>     message Empty {
>     }
>
>     message SimpleByteArray {
>       bytes byte_array = 1;
>     }
>
> Again I ran iterations of growing message size from 500 bytes to 35 KB,
> and again around 9 KB the call time started to jump to around 220 ms.
> I also noticed that even with large messages, the call time sometimes
> drops back to around 20 ms. The graph looks like:
>
> [graph image omitted]
>
> The serialization and deserialization times seem negligible; even for a
> large message (1 MB) serialization takes 2 ms and deserialization less
> than 1 ms.
> I measured serialization by timing
>
>     SerializationTraits<M>::Serialize(message, &send_buf_, &own_buf_);
>
> in call.h, and deserialization by timing
>
>     SerializationTraits<RequestType>::Deserialize(param.request, &req, param.max_message_size);
>
> in method_handler_impl.h.
>
> I would appreciate it if you could suggest more sampling places in the
> code so I can profile gRPC and understand this issue better.
>
> Thanks,
> Meir
>
> From: Louis Ryan [mailto:[email protected]]
> Sent: Thursday, July 28, 2016 3:31 AM
> To: Meir Vengrover
> Cc: grpc-io
> Subject: Re: [grpc-io] gRPC call time overhead
>
> A step function like that is pretty odd.
> What kind of payload are you using? Just to isolate that out, could you
> benchmark your message serialization for the same range of sizes?
>
> As a general rule I would expect a benchmark of sequential
> request/responses with 'typical' 1 KB protobufs between two normal
> servers on the same LAN to be sub-millisecond.
>
> On Wed, Jul 27, 2016 at 7:11 AM, <[email protected]> wrote:
>
> Hello gRPC group,
>
> I'm working on the Applied Materials SW team, and we're evaluating the
> possibility of replacing our CORBA-based RPC with gRPC.
> My benchmark compares gRPC vs. CORBA performance while sending various
> message sizes from client to server. The server replies immediately to
> each call. Both client and server are C++ based, and I'm using Windows.
> I noticed that for small messages one RPC takes about ~20 ms between the
> two computers (from the client call until it gets the response), but from
> a certain message size the call time jumps to ~220 ms.
> I see this 200 ms overhead above ~9500 bytes on an established connection,
> and above ~1070 bytes when the connection has just started.
> When performing many gRPC calls, this 200 ms overhead becomes a problem.
> Please see the following graph, which shows call time vs. message byte
> size.
>
> [graph image omitted]
>
> When using CORBA (which is also based on TCP) I don't see such overhead;
> a call takes ~15 ms.
>
> Are you familiar with such behavior? Can you explain it?
>
> Thanks
> Meir Vengrover,
> SW engineer, Applied Materials.
>
> --
> You received this message because you are subscribed to the Google Groups
> "grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/EA173EF8BDC63545970E3E597732992F81893F%40045-SN2MPN4-071.045d.mgd.msft.net.
> For more options, visit https://groups.google.com/d/optout.
