Re: [grpc-io] async grpc CQ loop

2018-09-08 Thread 'Sree Kuchibhotla' via grpc.io
>> To send a grpc reply upon receiving some data on a socket, how can one get the tag? The tag is something (any void pointer) you pass to the API when starting an operation like read or write. The tag is, in a way, the identifier for the operation you started. AsyncNext just returns the tag that you
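
A minimal sketch of the tag mechanism described above, assuming an already-started ClientAsyncReaderWriter ('stream'), the CompletionQueue it was created with ('cq'), and proto types Req/Resp; all of these names are illustrative, not from the original thread:

    #include <chrono>
    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/async_stream.h>

    template <typename Req, typename Resp>
    void DriveOneEvent(grpc::ClientAsyncReaderWriter<Req, Resp>* stream,
                       grpc::CompletionQueue* cq) {
      Req request;
      Resp response;
      // The tag is any void*; small integers are used here to identify the ops.
      stream->Write(request, reinterpret_cast<void*>(1));
      stream->Read(&response, reinterpret_cast<void*>(2));

      void* got_tag = nullptr;
      bool ok = false;
      auto deadline = std::chrono::system_clock::now() + std::chrono::seconds(1);
      // AsyncNext hands back the same pointer that was passed when the op started.
      switch (cq->AsyncNext(&got_tag, &ok, deadline)) {
        case grpc::CompletionQueue::GOT_EVENT:
          if (got_tag == reinterpret_cast<void*>(1)) { /* the Write completed */ }
          if (got_tag == reinterpret_cast<void*>(2)) { /* the Read completed  */ }
          break;
        case grpc::CompletionQueue::TIMEOUT:
        case grpc::CompletionQueue::SHUTDOWN:
          break;
      }
    }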

[grpc-io] Re: What happens if I start an empty batch of grpc_op?

2018-08-15 Thread 'Sree Kuchibhotla' via grpc.io
Hi Neil, To me it sounds very reasonable to expect that if you call *grpc_call_start_batch* with an empty batch, the tag is put in the completion queue right away (and thereby would kick a thread calling grpc_completion_queue_next()/pluck() to pick up that tag). However, since it is not
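
For context, a hedged sketch of what "an empty batch" looks like at the C-core surface; 'call' and 'tag' are assumed to already exist, and nothing here asserts what the queue actually does with the tag - that is the open question in the thread:

    #include <grpc/grpc.h>

    void StartEmptyBatch(grpc_call* call, void* tag) {
      // Zero ops: the batch carries no read/write/metadata work, only the tag.
      grpc_call_error err =
          grpc_call_start_batch(call, /*ops=*/nullptr, /*nops=*/0, tag,
                                /*reserved=*/nullptr);
      (void)err;  // when (or whether) 'tag' surfaces on the cq is what is discussed above
    }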

[grpc-io] Re: [grpc c++] sending requests immediately - avoid batching

2018-05-21 Thread 'Sree Kuchibhotla' via grpc.io
The set_write_through() option is a flag that lets grpc know when to acknowledge write completion, i.e., whether to acknowledge it after the bytes are sent on the wire, OR whether to acknowledge once it has cleared the flow-control logic and the grpc transport is sure that they are going on the wire. It doesn't
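
A short sketch of the flag in use; the stream, message types, and tag below are illustrative stand-ins:

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/async_stream.h>

    template <typename Req, typename Resp>
    void WriteThrough(grpc::ClientAsyncReaderWriter<Req, Resp>* stream,
                      const Req& request, void* tag) {
      grpc::WriteOptions options;
      options.set_write_through();  // acknowledge only after the bytes go out on the wire
      stream->Write(request, options, tag);
    }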

[grpc-io] Re: Getting Target that client connected to

2018-05-21 Thread 'Sree Kuchibhotla' via grpc.io
gRPC does not return the target name (this is something the user/application should keep track of when creating the channel). However, ClientContext::Peer() will return the address of the exact backend the call went to.
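
For illustration (note that in the C++ API the accessor is spelled lower-case, ClientContext::peer(); the helper name below is made up):

    #include <string>
    #include <grpcpp/grpcpp.h>

    // Returns the address of the backend a finished call actually went to,
    // e.g. "ipv4:10.0.0.5:50051". The original target name is not recoverable
    // from gRPC; the application has to remember it itself.
    std::string BackendFor(const grpc::ClientContext& context) {
      return context.peer();
    }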

[grpc-io] Re: grpc::Channel thread safety

2018-05-21 Thread 'Sree Kuchibhotla' via grpc.io
Yes, those APIs are thread-safe too. On Wednesday, April 4, 2018 at 10:17:03 AM UTC-7, ncte...@google.com wrote: > > There is no need to synchronize around either of those APIs. > > On Tuesday, March 27, 2018 at 10:56:51 AM UTC-7, Khuzema Pithewan wrote: >> >> Hi, >> >> Thanks for doing this

[grpc-io] Re: Async clients and completion queues

2018-05-21 Thread 'Sree Kuchibhotla' via grpc.io
Completion queues can be shared among multiple clients. Yes, for shutdown you got the sequence right. On Thursday, March 22, 2018 at 11:15:13 AM UTC-7, Todd Defilippi wrote: > > I have a number of async clients that are connecting to services on > multiple async servers. Should each
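
A sketch of the shutdown sequence being confirmed above, for one CompletionQueue shared by several async clients (the shutdown-then-drain order is the important part):

    #include <grpcpp/grpcpp.h>

    void ShutdownSharedCq(grpc::CompletionQueue* cq) {
      cq->Shutdown();                // 1. no new work may be added after this
      void* tag = nullptr;
      bool ok = false;
      while (cq->Next(&tag, &ok)) {  // 2. drain every pending event
        // handle or discard the remaining tags here
      }
      // 3. Next() returned false: the queue is fully drained and may be destroyed
    }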

Re: [grpc-io] Re: gRPC Performance

2018-03-08 Thread 'Sree Kuchibhotla' via grpc.io
Thanks for bringing this to our attention. The numbers look very low for C++ (especially for unary, it's way too low). We are investigating. thanks, Sree On Wed, Mar 7, 2018 at 4:38 PM 'Matt Kwong' via grpc.io <grpc-io@googlegroups.com> wrote: > +grpc-io > > Unfortunately, I'm not the best

Re: [grpc-io] quick question about pollset_kick() in ev_epoll1_linux.cc

2018-01-10 Thread 'Sree Kuchibhotla' via grpc.io
>> Though I'm still curious about the logic in that function, the forking issue notwithstanding: should that be an "assert not reached" on line 1098? Yes, the following line 1099 is redundant after the assert. SET_KICK_STATE(next_worker, KICKED); I will remove it. -Sree On Wed, Jan 10, 2018

Re: [grpc-io] quick question about pollset_kick() in ev_epoll1_linux.cc

2018-01-09 Thread 'Sree Kuchibhotla' via grpc.io
Oh, I didn't realize you were doing a fork() call. grpc actually does not support fork and it is known to create strange issues like the one you reported. We did some work to mitigate some specific uses of fork (https://github.com/grpc/grpc/pull/13025) but by and large it is not

Re: [grpc-io] quick question about pollset_kick() in ev_epoll1_linux.cc

2018-01-09 Thread 'Sree Kuchibhotla' via grpc.io
Hi, This looks like a bug in the code. - Does this failure happen consistently? - Could you give more details on your test that is causing this? (it would be ideal if I could create something similar) Also, would it be possible for you to rerun your tests by replacing the assert with the

[grpc-io] Re: How build a service (one port) with ServerBuilder from a range of ports, using the next free? (C++)

2018-01-09 Thread 'Sree Kuchibhotla' via grpc.io
I apologize for the late response again. You have probably already figured this out by now, but you could also do builder->AddListeningPort("*address*:0", creds, &selected_port) (i.e., pass the port number as "0" when you pass the address to the AddListeningPort API) and this picks an available port and returns
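
A minimal sketch of the ":0" pattern described above (the address, credentials, and variable names are illustrative):

    #include <memory>
    #include <grpcpp/grpcpp.h>
    #include <grpcpp/security/server_credentials.h>

    std::unique_ptr<grpc::Server> StartOnFreePort(grpc::ServerBuilder& builder,
                                                  int* selected_port) {
      // Port "0" asks gRPC to pick an available port; the chosen port is
      // written to *selected_port by BuildAndStart().
      builder.AddListeningPort("0.0.0.0:0", grpc::InsecureServerCredentials(),
                               selected_port);
      return builder.BuildAndStart();
    }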

Re: [grpc-io] [grpc-core] assertion failed: cq_event_queue_num_items(&cc->data.queue) == 0 when shutting down CQ

2017-12-05 Thread 'Sree Kuchibhotla' via grpc.io
No, it wouldn't. If you did not drain the cq, the shutdown won't even finish. -Sree On Tue, Dec 5, 2017 at 7:30 PM, wrote: > Hi, Sree > > If I didn't drain the CQ, is it possible for it to behave like that? > > Thanks, > Yihao > > On Tuesday, September 19, 2017 at

[grpc-io] Re: How build a service (one port) with ServerBuilder from a range of ports, using the next free? (C++)

2017-12-05 Thread 'Sree Kuchibhotla' via grpc.io
Sorry for the late response. BuildAndStart() does not promise that it will return a "nullptr" if the port you passed is already in use. You need to find an unused port yourself. You could do the check yourself by randomly selecting a port number, creating a

Re: [grpc-io] Server and client completion queue

2017-11-03 Thread 'Sree Kuchibhotla' via grpc.io
The async server here uses one cq for both client and server: https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_async_server.cc - The idea here is to use an object (one per request) to maintain the 'state' of the request. In the above example, it is the 'CallData'. You
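
Condensed from the greeter_async_server.cc example linked above, the per-request state object looks roughly like this (Greeter, HelloRequest, and HelloReply come from the helloworld proto used by that example):

    #include <grpcpp/grpcpp.h>
    #include "helloworld.grpc.pb.h"

    class CallData {
     public:
      CallData(helloworld::Greeter::AsyncService* service,
               grpc::ServerCompletionQueue* cq)
          : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
        Proceed();
      }

      void Proceed() {
        if (status_ == CREATE) {
          status_ = PROCESS;
          // Ask gRPC to deliver the next SayHello call to this object;
          // 'this' is the tag that later comes back from the completion queue.
          service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
        } else if (status_ == PROCESS) {
          new CallData(service_, cq_);  // spawn state for the next incoming call
          reply_.set_message("Hello " + request_.name());
          status_ = FINISH;
          responder_.Finish(reply_, grpc::Status::OK, this);
        } else {
          delete this;  // FINISH: the Finish op completed, this request is done
        }
      }

     private:
      helloworld::Greeter::AsyncService* service_;
      grpc::ServerCompletionQueue* cq_;
      grpc::ServerContext ctx_;
      helloworld::HelloRequest request_;
      helloworld::HelloReply reply_;
      grpc::ServerAsyncResponseWriter<helloworld::HelloReply> responder_;
      enum CallStatus { CREATE, PROCESS, FINISH };
      CallStatus status_;
    };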

[grpc-io] Re: [C++] Get grpc::status of async bi-directional stream that was closed by server

2017-10-10 Thread 'Sree Kuchibhotla' via grpc.io
Hi Eric, Can you paste snippets of your code on the server side and the client side? I am imagining you are doing something like the following: on server: server_stream->Finish(..) cq->AsyncNext(); on client: client_stream->Read(..) cq->AsyncNext() // Would have returned your tag
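
For the client side, a hedged sketch of the step that actually retrieves the status once the server has finished the stream (Req/Resp and the tag are illustrative): a Read that completes with ok == false signals the server-side Finish, and the client then calls Finish() itself:

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/async_stream.h>

    template <typename Req, typename Resp>
    void OnReadFailed(grpc::ClientAsyncReaderWriter<Req, Resp>* stream,
                      grpc::Status* status, void* finish_tag) {
      // Ask for the grpc::Status the server closed the stream with; *status is
      // only valid after finish_tag comes back out of the completion queue.
      stream->Finish(status, finish_tag);
    }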

[grpc-io] Re: [C++] Get grpc::status of async bi-directional stream that was closed by server

2017-10-10 Thread 'Sree Kuchibhotla' via grpc.io
On Thursday, September 28, 2017 at 5:23:13 PM UTC-7, edelma...@gmail.com wrote: > > We have an async bi-directional streaming setup. We have implemented a > health check on the stream and have logic where the server decides a client > is stale or gone and closes the connection with that

Re: [grpc-io] gRPC frozen state

2017-09-20 Thread 'Sree Kuchibhotla' via grpc.io
Yes, GRPC_VERBOSITY=DEBUG. On Wed, Sep 20, 2017 at 12:10 PM, ryan via grpc.io wrote: > 1) Yep, will do - I should continue to run with GRPC_VERBOSITY=DEBUG right? > 2) Just merged latest master through > > > On Wednesday, September 20, 2017 at 9:32:54 AM UTC-7, Sree

Re: [grpc-io] gRPC frozen state

2017-09-20 Thread 'Sree Kuchibhotla' via grpc.io
The line does look a bit suspicious to me (It means the combiner has 30597 closures - which is a lot - the least significant bit in "last" is ignored - the actual count is stored in the remaining bits) D0919 14:54:36.312349629 51030 combiner.c:163] C:0x7f54faf35c00 grpc_combiner_execute

Re: [grpc-io] [grpc-core] assertion failed: cq_event_queue_num_items(&cc->data.queue) == 0 when shutting down CQ

2017-09-19 Thread 'Sree Kuchibhotla' via grpc.io
Thanks for the info, Yihao. This does look like a bug, but I am not sure what might be happening. We did fix a bug in the completion queue shutdown path (https://github.com/grpc/grpc/pull/11703), but that is in version 1.6.0 and later. Would you mind upgrading to the latest release 1.6.1 and

Re: [grpc-io] [grpc-core] assertion failed: cq_event_queue_num_items(&cc->data.queue) == 0 when shutting down CQ

2017-09-15 Thread 'Sree Kuchibhotla' via grpc.io
It does look like a bug in the completion queue shutdown path. Can you share the test program you have been using to reproduce this? thanks, Sree On Thu, Sep 14, 2017 at 6:23 PM, yihao yang wrote: > I0914 12:05:40.258058747 14907 completion_queue.c:764] >

[grpc-io] Re: [c++] channel.WaitForConnected assertion failed on pollset.polling_island != nullptr

2017-09-14 Thread 'Sree Kuchibhotla' via grpc.io
Hi Yihao, This looks like a bug in the completion queue shutdown path. This code has changed quite a bit since v1.0, so I recommend upgrading to a recent version of grpc. -Sree On Friday, September 8, 2017 at 7:41:32 PM UTC-7, yihao yang wrote: > > // get channel state first > I0711

Re: [grpc-io] Configuring number of worker threads spun by GRPC synchronous server C++

2017-09-11 Thread 'Sree Kuchibhotla' via grpc.io
Hi Anirudh, Sure, you could go with the latest version - but keep in mind that we MAY change num_cqs() to be equal to the number of cores in a future version. The same goes for the other settings, though they are less likely to change. So if having one completion queue and a minimum of one polling thread

Re: [grpc-io] Configuring number of worker threads spun by GRPC synchronous server C++

2017-09-11 Thread 'Sree Kuchibhotla' via grpc.io
Yes, it is elastic. It will dial the number of threads down to 1 thread (to be more precise, it will dial down to whatever is set in the ServerBuilder::SyncServerOption::MIN_POLLERS setting, which is "1" by default; also, this is a per-completion-queue setting). thanks, -Sree On Mon, Sep
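
For reference, a sketch of where these knobs live on ServerBuilder (the values below are illustrative, not recommendations):

    #include <grpcpp/grpcpp.h>

    void ConfigureSyncServer(grpc::ServerBuilder& builder) {
      // Completion queues used for sync RPCs, and the per-cq polling-thread bounds.
      builder.SetSyncServerOption(grpc::ServerBuilder::SyncServerOption::NUM_CQS, 1);
      builder.SetSyncServerOption(grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
      builder.SetSyncServerOption(grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 2);
    }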

Re: [grpc-io] Configuring number of worker threads spun by GRPC synchronous server C++

2017-09-11 Thread 'Sree Kuchibhotla' via grpc.io
Hi Anirudh, There is no direct way of reducing the number of worker threads. Btw, I am assuming you are using a gRPC version earlier than the latest 1.6. If so, when you are using a synchronous grpc server, it by default creates as many "completion queues" as the number of cores (in the latest

Re: [grpc-io] Blocking on multiple grpc completion queues

2017-08-31 Thread 'Sree Kuchibhotla' via grpc.io
Hi Akshita, Just taking a step back: if you need some way to monitor multiple channels, one pattern you can use is to create just one completion queue and use it for all "calls" (across all channels). This way, you have just one completion queue and it's easier to handle with one thread (I am
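
A hedged sketch of that pattern; MyService, AsyncEcho, EchoRequest, and the backend addresses are illustrative stand-ins for generated stub code:

    #include <grpcpp/grpcpp.h>
    #include "my_service.grpc.pb.h"

    void MonitorTwoBackends(const EchoRequest& request) {
      grpc::CompletionQueue cq;  // one queue for calls on both channels
      auto stub_a = MyService::NewStub(grpc::CreateChannel(
          "backend-a:50051", grpc::InsecureChannelCredentials()));
      auto stub_b = MyService::NewStub(grpc::CreateChannel(
          "backend-b:50051", grpc::InsecureChannelCredentials()));

      grpc::ClientContext ctx_a, ctx_b;
      auto rpc_a = stub_a->AsyncEcho(&ctx_a, request, &cq);
      auto rpc_b = stub_b->AsyncEcho(&ctx_b, request, &cq);
      // A single thread can now drive both calls with cq.Next()/AsyncNext()
      // instead of blocking on one completion queue per channel.
      // (Finishing the calls and draining the queue is elided here.)
    }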

[grpc-io] Re: Single threaded async server in C++

2017-08-09 Thread 'Sree Kuchibhotla' via grpc.io
Hi Deepak, By closure I meant grpc_closure, i.e., the callback functions which contain most of the logic inside grpc core. thanks, Sree On Wed, Aug 9, 2017 at 5:23 PM, Deepak Ojha

[grpc-io] Re: Single threaded async server in C++

2017-08-08 Thread 'Sree Kuchibhotla' via grpc.io
Hi Deepak, grpc core internally creates two sets of thread pools: - Timer thread pool (to execute timers/alarms): max of 2 threads, typically just one. - Executor thread pool

Re: [grpc-io] Re: How to to close BiDi streaming gracefully from C++ server thread (pthr)when c++ client gets aborted.

2017-07-12 Thread 'Sree Kuchibhotla' via grpc.io
cts of TryCancel? Thanks, Yihao On Tue, Jul 11, 2017 at 5:05 PM, 'Sree Kuchibhotla' via grpc.io <grpc-io@googlegroups.com> wrote: > On async streams on servers, you simply call stream->Finish(const Status > <https://cs.corp.google.com/piper///depot/google3/third_party/grpc/googl

[grpc-io] Re: How to to close BiDi streaming gracefully from C++ server thread (pthr)when c++ client gets aborted.

2017-07-11 Thread 'Sree Kuchibhotla' via grpc.io
On async streams on servers, you simply call stream->Finish(const Status &
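
A sketch of that call for the async-server case; the stream, status text, and tag names are illustrative:

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/async_stream.h>

    template <typename Resp, typename Req>
    void CloseStream(grpc::ServerAsyncReaderWriter<Resp, Req>* stream,
                     void* finish_tag) {
      // Sending the final status is the "close"; finish_tag comes back from
      // the ServerCompletionQueue once the close has actually gone out.
      stream->Finish(grpc::Status(grpc::StatusCode::ABORTED, "client went away"),
                     finish_tag);
    }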

[grpc-io] Re: How to to close BiDi streaming gracefully from C++ server thread (pthr)when c++ client gets aborted.

2017-06-27 Thread 'Sree Kuchibhotla' via grpc.io
Sorry for the late response. There is no special method to 'close' the BiDi streams. On the server, just returning a status would mean that you are done with the stream. However, in the example you have given, you seem to be calling just 1 read. Since you mentioned you are noticing a memory

Re: [grpc-io] gRPC core Architecture

2017-04-28 Thread 'Sree Kuchibhotla' via grpc.io
Understood. Not sure why gdb is not showing line numbers for you, but I build the grpc library by setting the environment variable "CONFIG=dbg". (Most of the time I am lazy and just have our test script do the build for me, i.e., do $tools/run_tests/run_tests.py -lc -cdbg --build_only $

Re: [grpc-io] thread manager unlimited number of threads

2017-04-28 Thread 'Sree Kuchibhotla' via grpc.io
Hi Siyuan, Yes, it would be nice to limit the total number of threads; it was a minor design oversight and we do intend to fix it at some point in the future (happy to take a pull request). thanks, Sree On Thu, Apr 27, 2017 at 11:54 AM, wrote: > Hi all, > > I was playing with

Re: [grpc-io] gRPC core Architecture

2017-04-27 Thread 'Sree Kuchibhotla' via grpc.io
Hi Rajarshi, If you are planning to use grpc and build something on top of it, I would recommend just starting with the example programs: https://github.com/grpc/grpc/tree/master/examples/cpp If you are planning to understand the internals and want to contribute, unfortunately there is no doc that

[grpc-io] Re: gRFC for (C-Core) completion queue API changes

2017-01-24 Thread 'Sree Kuchibhotla' via grpc.io
https://github.com/sreecha/proposal/blob/b486fe220fd06f90b79df6c24c323b89fe495f8d/cq-changes.md is a better link to view the doc On Tuesday, January 24, 2017 at 1:36:20 AM UTC-8, Sree Kuchibhotla wrote: > > I have created a gRFC for completion queue API changes (in C-Core, C++ and > other

[grpc-io] gRFC for (C-Core) completion queue API changes

2017-01-24 Thread 'Sree Kuchibhotla' via grpc.io
I have created a gRFC for completion queue API changes (in C-Core, C++ and other wrapped languages) https://github.com/grpc/proposal/pull/6 Please let me know your comments on this thread.

Re: [grpc-io] what to do when ServerCompletionQueue::Next return false?

2017-01-23 Thread 'Sree Kuchibhotla' via grpc.io
In addition to what Vijay said, I wanted to address the following point you mentioned: >> If I do nothing in this case, the CallData objects get exhausted after a while and the program hangs at the 3rd line (the start of the while loop). By the way, I am assuming you are using the grpc async server example from
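
A sketch of the event loop from that async server example, showing the two points in question: each CallData re-arms by creating a successor (so they do not run out), and Next() returning false is the signal that the queue has been shut down and fully drained. (CallData here is the per-request state object from the example, as in the earlier sketch.)

    #include <grpcpp/grpcpp.h>
    #include "helloworld.grpc.pb.h"

    void HandleRpcs(helloworld::Greeter::AsyncService* service,
                    grpc::ServerCompletionQueue* cq) {
      new CallData(service, cq);  // seed the first per-request state object
      void* tag = nullptr;
      bool ok = false;
      while (cq->Next(&tag, &ok)) {
        // Each CallData requests the next incoming call itself, so the supply
        // of CallData objects never runs dry.
        static_cast<CallData*>(tag)->Proceed();
      }
      // Next() returned false: Server::Shutdown() and cq->Shutdown() have been
      // called and the queue is drained, so it is safe to leave the loop.
    }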

Re: [grpc-io] Performance dashboard moved or down?

2016-10-27 Thread 'Sree Kuchibhotla' via grpc.io
Hi Chad, The dashboard link from the blog post was for the 1.0-branch perf - it was broken until a couple of days ago. We also run continuous perf benchmarks on master and that dashboard is here: https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5760820306771968 thanks, Sree On