Hi,
For bi-directional streaming, can I have both pending read and write
operations at the same time (1 of each)?
All the examples I found so far seem to do ping-pong behavior (1 read
followed by 1 write), but I don't recall reading any documentation that
mentions this constraint on the protocol
> Ping-pong is not required for any gRPC
> streaming operations.
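A standalone sketch of the non-ping-pong pattern (plain threads and a queue stand in for the stream; all names here are invented, no grpc types involved). The reader and writer run concurrently and never wait for each other, mirroring one pending Read() and one pending Write() in flight at the same time:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Stand-in for one direction of a bidirectional stream.
class Channel {
 public:
  void Push(std::string m) {
    std::lock_guard<std::mutex> l(mu_);
    q_.push(std::move(m));
    cv_.notify_one();
  }
  std::string Pop() {
    std::unique_lock<std::mutex> l(mu_);
    cv_.wait(l, [this] { return !q_.empty(); });
    std::string m = std::move(q_.front());
    q_.pop();
    return m;
  }
 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::string> q_;
};

// Reader and writer proceed independently; neither alternates with the other.
std::vector<std::string> RunDuplex(int n) {
  Channel inbound, outbound;
  std::thread writer([&] {  // our outstanding writes
    for (int i = 0; i < n; ++i) outbound.Push("w" + std::to_string(i));
  });
  std::thread remote([&] {  // the peer streams on its own schedule
    for (int i = 0; i < n; ++i) inbound.Push("r" + std::to_string(i));
  });
  std::vector<std::string> got;
  for (int i = 0; i < n; ++i) got.push_back(inbound.Pop());  // our reads
  writer.join();
  remote.join();
  return got;
}
```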
>
> On Wed, Feb 22, 2017 at 4:45 PM Arpit Baldeva <abald...@gmail.com> wrote:
>
>> Hi,
>>
>> For bi-directional streaming, can I have both pending read and write
>> operations at the same time (1 of each)?
You don't need to loop through all the message types. Protobuf generates a
'case' method which would tell you the case/type of the message. You should
use a switch statement in your code using that generated method.
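As a standalone sketch: protobuf generates a `<oneof-name>_case()` accessor for each oneof, and switching on it avoids probing every `has_*` field. The message type and accessor below are modeled on generated code, not copied from it:

```cpp
#include <string>

// Stand-in for a generated proto class with a oneof named "payload".
struct Message {
  enum PayloadCase { PAYLOAD_NOT_SET = 0, kLogin = 1, kChat = 2 };
  PayloadCase payload_case() const { return case_; }
  PayloadCase case_ = PAYLOAD_NOT_SET;
};

// Dispatch on the generated case accessor instead of looping over types.
std::string Dispatch(const Message& m) {
  switch (m.payload_case()) {
    case Message::kLogin:          return "handle login";
    case Message::kChat:           return "handle chat";
    case Message::PAYLOAD_NOT_SET: return "empty";
  }
  return "unknown";
}
```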
On Monday, February 13, 2017 at 2:30:11 PM UTC-8, jing...@dialpad.com wrote:
>
Apologies for multiple emails. It just occurred to me that the topic of my
post is incorrect. What I am really looking for is the client/peer
connection establishment/closed event (and not really the stream id of rpc
call - I am having a hard time trying to recall why I asked for that in the
first place).
Hi,
Are there any plans to support interceptors in C++? It looks like they are
supported in GO.
Does the server support rate limiting support per IP or is this
functionality expected to be built at application level? If application
level, interceptors would be handy here.
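A minimal application-level per-IP limiter could be a fixed-window counter keyed on the peer address (standalone sketch; in a real handler the key would presumably come from something like ServerContext::peer(), and the clock would be real time rather than an injected value):

```cpp
#include <string>
#include <unordered_map>

class PerIpLimiter {
 public:
  explicit PerIpLimiter(int max_per_window) : max_(max_per_window) {}

  // now_sec is injected so tests can control time; production code would
  // derive it from a monotonic clock. Returns false once the per-window
  // budget for this ip is exhausted.
  bool Allow(const std::string& ip, long now_sec) {
    Entry& e = map_[ip];
    if (e.window != now_sec) {  // new window: reset the counter
      e.window = now_sec;
      e.count = 0;
    }
    return ++e.count <= max_;
  }

 private:
  struct Entry { long window = -1; int count = 0; };
  int max_;
  std::unordered_map<std::string, Entry> map_;
};
```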
Thanks
Arpit
--
Hi,
Is there a doc that details the server side connection management? I
see
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
but it is more applicable to the client side.
I am trying to figure out details like how long does a socket remain active
for an
Hi,
I am investigating the viability of using gRPC (C++ version) to replace our
existing RPC framework at Electronic Arts, and would appreciate it if
somebody could provide some answers so that I can make the best judgement.
For most of our existing usage, I have an idea of how to do that
adding some metric APIs
in the various implementations recently.
I was mostly curious about what was already available. It'd be nice to know
the amount of sent/received bytes per peer.
On Monday, December 5, 2016 at 3:40:41 PM UTC-8, Eric Anderson wrote:
>
> On Thu, Dec 1, 2016 at 3:40 PM, Arpi
With the Synchronous API, integrators interested in having a little more
control over the behavior of the thread pool (for example, assigning a
thread to a particular core) can define
GRPC_CUSTOM_DEFAULT_THREAD_POOL and provide their own implementation of the
thread pool interface
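A standalone sketch of that idea: a minimal pool behind a single-method interface (the shape is modeled on a plain Add(callback) entry point, not copied from grpc headers; the affinity call is left as a comment since it is platform-specific, e.g. pthread_setaffinity_np on Linux):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPoolInterface {
 public:
  virtual ~ThreadPoolInterface() = default;
  virtual void Add(const std::function<void()>& cb) = 0;
};

class PinnedPool : public ThreadPoolInterface {
 public:
  explicit PinnedPool(int n) {
    for (int i = 0; i < n; ++i)
      workers_.emplace_back([this] {
        // A platform-specific core-affinity call would go here, before
        // entering the work loop.
        for (;;) {
          std::function<void()> job;
          {
            std::unique_lock<std::mutex> l(mu_);
            cv_.wait(l, [this] { return done_ || !jobs_.empty(); });
            if (done_ && jobs_.empty()) return;  // drain, then exit
            job = std::move(jobs_.front());
            jobs_.pop();
          }
          job();
        }
      });
  }
  ~PinnedPool() override {
    { std::lock_guard<std::mutex> l(mu_); done_ = true; }
    cv_.notify_all();
    for (auto& t : workers_) t.join();
  }
  void Add(const std::function<void()>& cb) override {
    { std::lock_guard<std::mutex> l(mu_); jobs_.push(cb); }
    cv_.notify_one();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> jobs_;
  std::vector<std::thread> workers_;
  bool done_ = false;
};
```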
For Go, this thread claimed that something like this would be available in
Q1 2017 - https://groups.google.com/forum/#!topic/grpc-io/C0rAhtCUhSs
Regardless of the language (I am working with the C++ impl), my plan is to
implement this functionality via RPC as well (a streaming Ping rpc). On
server
Hi,
Using version 1.4.2.
I see the GRPC_ARG_MAX_CONCURRENT_STREAMS option to limit concurrent streams
per connection. Is there an option available (I did not find any) or planned
to be introduced in the future that would allow for setting max connections
on server. Any new incoming connection
similar to what you have, where every Service type implements its
> own CallData/TagInfo class to do the processing it needs to do for each
> function. Does all this sound okay?
>
>
> On Thursday, July 20, 2017 at 1:43:13 PM UTC-7, Arpit Baldeva wrote:
>>
Hi,
Is there a release schedule page that I can follow for upcoming release
dates? I am interested in knowing the next release due date (tentative is
fine).
Thanks.
--
You received this message because you are subscribed to the Google Groups
"grpc.io" group.
To unsubscribe from this group
You can add as many services as you like to a single server (using
RegisterService call).
Check out the following post for a better (but more complicated) sample:
https://groups.google.com/forum/#!topic/grpc-io/DuBDpK96B14 . It does not
show multiple services but that part is not complicated.
On
Here is a more complex example :)
- https://groups.google.com/forum/#!topic/grpc-io/DuBDpK96B14
It implements the server side but the concepts remain much the same.
On Wednesday, April 26, 2017 at 11:16:31 PM UTC-7, Anirudh Kasturi wrote:
>
> Thanks Kuldeep !
>
> On Apr 26, 2017 11:14 PM,
value for the 'ok' bool. For example, when the client finishes streaming
requests, the read rpc tag's 'ok' bool will be false (and similarly for
other tags when the client times out or cancels).
On Fri, Aug 18, 2017 at 3:09 PM, Arpit Baldeva <abald...@gmail.com> wrote:
> Thanks for the response.
Hi,
In async model, from my experiments, when server shuts down, 'ok' will be
false. Is there any other scenario in which it can be false?
When the server is shutting down, I don't want to queue up another rpc
request. But if the 'ok' boolean can be false in some other scenario (is
there a
Next().
>
> (Other that we've observed ok to be false as described above, whereas
> we've never observed the result of Next() to be false
> unless the server was actually shutting down).
>
> Best,
>
> On Fri, Aug 18, 2017 at 11:17 PM, Arpit Baldeva <abal...@gmail.com
> >
the net because the CLA hadn't been verified. Will get back
>> to you with review comments ASAP.
>>
>> On Thu, May 11, 2017 at 7:12 PM, Varun Talwar <varun...@google.com
>> > wrote:
>>
>>> Assigned to Lisa who can help review and merge the docs.
Hi,
Is there any recommended process to get the PR approved and merged? I
submitted a PR a while back (https://github.com/grpc/grpc/pull/10919 ) and
it is still waiting for a reviewer and CLA verification. I am also unsure
how to select/assign a person as the reviewer.
Thanks.
--
You
to the PR.
>
>
>
> On Thu, May 11, 2017 at 11:09 AM, Arpit Baldeva <abal...@gmail.com
> > wrote:
>
>> Hi,
>>
>> Is there any recommended process to get the PR approved and merged? I
>> submitted a PR a while back (https://github.com/grpc/grpc/pull/10
Hi,
Reference: https://github.com/grpc/grpc/issues/10132
I'd like to contribute the changes for this issue. My plan is to change the
"Raw" string to "GrpcRaw". Is there a particular reviewer who can be added
to the issue?
Thanks.
--
You received this message because you are subscribed to
I work in C++ but I think the strategy we have can be adopted in any
language.
We use grpc status codes only for a very limited set of system-level
failures. This way, the mapping from our application logic to grpc status
codes stays small. An incentive for doing this is streaming rpcs, where we
ess": The
> *RPC* is successful, even if the application request was not).
>
> Thanks!
>
> Evan
>
>
>
> On Thursday, September 14, 2017 at 12:52:28 PM UTC-4, Arpit Baldeva wrote:
>>
>> I work in C++ but I think the strategy we have can be adopted in any
Hi,
I see an occasional crash when shutting down my server (version 1.4.2, VS 2015,
Windows).
My set up: I use async api and have 2 threads.
1. Thread 1 processes the tags from the completion queue. It also executes
the shutdown request.
2. Thread 2 plucks the tags from the completion queue and
to be wrong.
Thanks.
On Wednesday, September 20, 2017 at 2:49:53 PM UTC-7, Yang Gao wrote:
>
> Did you destruct the mServer and mCQ before destroying the rpc? If you
> keep either one living after the rpc's are all destroyed, does it still
> crash?
>
>
>
> On Wed, Sep 20
On Thursday, October 5, 2017 at 9:29:36 AM UTC-7, Arpit Baldeva wrote:
>>
>> Yes, I do turn on the trace for api (and some other categories) and set
>> the verbosity level to INFO.
>>
>> I thought of the various GRPC_TRACE variables as simply the categories I
>>
n INFO.
> - Vijay
>
> On Wed, Oct 4, 2017 at 3:52 PM Arpit Baldeva <abal...@gmail.com
> > wrote:
>
>> Hi,
>>
>> Tested on 1.4.2.
>>
>> Currently, grpc has 3 logging levels.
>>
>> GPR_LOG_SEVERITY_DEBUG,
>> GPR_LOG_SEVERITY_INFO,
In case of a connection error this reading routine
>>> will get an error.
>>> We're calling this mechanism a point-to-point healthcheck. The
>>> client-side work is done and the server-side is underway.
>>>
>>>
>>> On Friday, March 3, 2017 at 9:21:41 AM UTC-
> connection has gone away.
> Even gRPC can't really be sure when a connection is gone (we can guess).
> Like I suggested before, why not send a "Going away" message just before
> disconnect? This is the same solution that HTTP/2 uses under the hood
> (called "go away").
>
Hi,
After reading
https://github.com/ejona86/proposal/blob/a339b01be9eafffb1adc4db8c782469caed18bdc/A9-server-side-conn-mgt.md
, I am looking for a small clarification.
It looks like the connections are not considered idle if they have
outstanding rpcs. That would mean it includes server
Hi,
Currently, I set the desired configuration of the server before starting up
mServerBuilder.AddChannelArgument(GRPC_ARG_MAX_RECEIVE_MESSAGE_LENGTH,
maxIncomingMessageSize);
I did not see an obvious way to modify these arguments after the server has
started and has been running for some time. Is
Hi,
Version - 1.4.2 - Windows - C++.
I noticed some oddities with the server log and want to ensure what I am
seeing is intended and not a bug.
I have multiple completion queues in my server and a thread is dedicated to
block on each completion queue separately. I am seeing debug output from
valid message that got read. If not, you know that there are
> certainly no more messages that can ever be read from this stream.
>
> Client-side Finish: ok should always be true
>
> Server-side AsyncNotifyWhenDone: ok should always be true
>
> HTH!
>
> - Vijay
>
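The Read semantics summarized above can be modeled standalone (a plain queue stands in for CompletionQueue::Next delivering (tag, ok) pairs; the Op and Event names are invented for the sketch). The rule it encodes: once a read tag comes back with ok == false, never request another read on that stream:

```cpp
#include <queue>

enum class Op { kRead, kWrite };
struct Event { Op op; bool ok; };

// Returns how many reads were re-requested before the stream half-closed.
int DrainReads(std::queue<Event>& cq) {
  int rerequested = 0;
  while (!cq.empty()) {
    Event ev = cq.front();
    cq.pop();
    if (ev.op == Op::kRead) {
      if (!ev.ok) break;  // client finished streaming: no more messages ever
      ++rerequested;      // here a real server would issue the next Read()
    }
  }
  return rerequested;
}
```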
Hi,
Tested on 1.4.2.
Currently, grpc has 3 logging levels.
GPR_LOG_SEVERITY_DEBUG,
GPR_LOG_SEVERITY_INFO,
GPR_LOG_SEVERITY_ERROR
IMHO, GPR_LOG_SEVERITY_INFO currently logs too much and is unsuitable for
use in a prod scenario. INFO to me means that something interesting happened
which
Hi,
Any ETA?
Thanks.
--
You received this message because you are subscribed to the Google Groups
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to
Hi,
Is it a known issue that grpc_cli does not support enums?
I have tried a few variations:
* grpc_cli type 127.0.0.1:50051 Greeter.GreetingType
* grpc_cli type 127.0.0.1:50051 .Greeter.GreetingType
etc. My finding is also consistent
with
Hi,
Currently, it looks like rpc urls are generated in the form of
/&lt;service&gt;/&lt;method&gt;. Is there a way to prefix the urls such that
the grpc client could call /&lt;prefix&gt;/&lt;service&gt;/&lt;method&gt;?
The receiver can chop off the prefix and call the correct rpc.
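The receiver-side rewrite could be a plain string operation (standalone sketch; the prefix and path values are examples, not anything grpc generates):

```cpp
#include <string>

// Strip a known prefix from a request path so the backend sees the
// standard /package.Service/Method form. Unprefixed paths pass through.
std::string StripPrefix(const std::string& path, const std::string& prefix) {
  if (path.compare(0, prefix.size(), prefix) == 0)
    return path.substr(prefix.size());
  return path;
}
```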
Use case: We'd like to have a proxy sitting between our client and grpc
server. The proxy
Hi,
Does grpc offer any guidelines around compatibility between grpc and
protobuf? It looks like when version 1.6 recently bumped the protobuf
dependency to 3.4.0, it started using a new php namespace option from it
without putting in a version guard. This means that a prior version of
Check
out
https://groups.google.com/forum/?utm_medium=email_source=footer#!msg/grpc-io/k6QFNxWDmv0/AVpTJ3LMEAAJ
to see how we deal with this particular problem (surfacing bi-directional
streaming errors without breaking the stream).
On Monday, October 30, 2017 at 9:44:02 AM UTC-7,
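The approach from that thread, embedding an application status in the response so the RPC itself still finishes OK, can be sketched standalone (every type, field, and value below is illustrative, standing in for generated proto messages):

```cpp
#include <string>

enum class AppError { kOk, kInsufficientFunds, kItemNotFound };

// Stand-in for a generated response message; app_status would be a
// google.rpc.Status (or app-defined enum) field in the real proto.
struct PurchaseResponse {
  AppError app_status = AppError::kOk;
  std::string receipt;
};

// Application failures travel inside the response; the RPC itself would
// still be completed with grpc::Status::OK, keeping the stream alive.
PurchaseResponse HandlePurchase(int balance, int price) {
  PurchaseResponse r;
  if (balance < price) {
    r.app_status = AppError::kInsufficientFunds;
    return r;
  }
  r.receipt = "ok:" + std::to_string(price);
  return r;
}
```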
There are many scenarios where an application would want to see custom error
codes, even if just for logging/better visibility. The pattern I use:
1. Stick a google.rpc.Status in every response message.
2. Create a Message with embedded enums next to the Service definition in
the proto file. Each
Hi,
Previous discussion here
-
https://groups.google.com/forum/?utm_medium=email_source=footer#!msg/grpc-io/qOJaIoIzAu0/jdN9VYFLAAAJ
It seems like there is a discrepancy in the way grpc shutdown API works
between Java and C++. From Java docs,
shutdown()
Initiates an orderly shutdown in
Hi,
Can you comment on the status of the C/C++ port? I searched on the forum
for recent conversations but nothing useful turned up.
Thanks.
On Monday, February 5, 2018 at 2:32:25 PM UTC-8, Carl Mastrangelo wrote:
>
> This has been merged, and is being implemented actively in Java and Go,
>
Client IP is not a reliable way to keep track of clients. For example, you
could have many clients sitting behind a proxy, and they will share the ip
address. I only use it for logging purposes. So it depends on your use case.
On Mon, Jan 8, 2018 at 4:58 PM, 'Menghan Li' via grpc.io <
the information available from context.Context() I was not able to
> distinguish different clients from the different/same remote.
>
> On Sat, Jan 6, 2018 at 1:42 PM Arpit Baldeva <abal...@gmail.com
> > wrote:
>
>> My question was mainly around how to make sure the
the ServerContext (rather than just shutting down the server).
> If that is not the case, then I think my previous thought was not right.
>
>
>
> On Wed, Dec 20, 2017 at 11:33 AM, Arpit Baldeva <abal...@gmail.com
> > wrote:
>
>> Hi,
>>
>> I have looked at the
Hi,
Any feedback on this?
Thanks.
On Thursday, January 4, 2018 at 10:05:56 AM UTC-8, Arpit Baldeva wrote:
>
> Hi Yang,
>
> Sorry for the delayed reply. I was on vacation.
>
> Let me restate the problem very simply - Currently, I can't call
> grpc::Server::Shutdown with
What you need to use is the GRPC_ARG_MAX_CONNECTION_IDLE_MS option.
However, that option is currently buggy. Before 1.9.0 it could cause a
crash, and starting with 1.9.0 it can cause a memory leak if enabled. See
this issue - https://github.com/grpc/grpc/pull/13594
On Saturday, February 10, 2018
Hi,
I have looked at the gRFC and am not too sure how it fixes some of the
current issues with library shutdown (you referenced the issues I mentioned on
GitHub/private conversations).
First off, I don't know about other users but I was already forced to use
grpc_init/shutdown in my code (so I
Okay, looks like this bug is officially open
at https://github.com/grpc/grpc/issues/10136 . It has been open for a
while. Are there any plans to fix this?
On Wednesday, August 1, 2018 at 11:48:59 AM UTC-7, Arpit Baldeva wrote:
>
> Hi,
>
> Based on my previous knowledge a
Hi,
Based on my previous knowledge and reference, the AsyncNotifyWhenDone tag
added by ServerContext::AsyncNotifyWhenDone should be received by the
application when Server::Shutdown is called. *This is supposed to be the
case for the rpcs that have been queued up for processing but not started
Any information on this?
Thanks.
On Thursday, March 1, 2018 at 3:41:58 PM UTC-8, Arpit Baldeva wrote:
>
> Hi,
>
> Looks like C core added cert reload support (
> https://github.com/grpc/grpc/pull/12644) but C++ api does not expose the
> functionality? Am I miss
Arpit,
>
> grpc_init initialized OpenSSL for a short period (~2 days) before the code
> was removed. Do you still see the problem if you fetch the latest master?
>
> On Monday, April 16, 2018 at 2:22:32 PM UTC-7, Arpit Baldeva wrote:
>>
>> Hi,
>>
>> I recentl
problems on OpenSSL init.
>
> For OpenSSL 1.0x, it is a valid concern. Let me check what is the best way
> to resolve this issue (pass a compiler flag, environment variable, or some
> API changes).
>
> Thanks,
> Jiangtao
>
>
> On Wed, Apr 18, 2018 at 7:34 PM Arpi
Hi,
I recently pinned down a sporadic race condition in my application caused by
grpc initializing OpenSSL internally. OpenSSL has some global callbacks that
grpc tries to initialize on its own, without the authorization of the
application. The problem is in the
The code you have on the server side looks correct to me (I have pretty much
the same code).
Have you loaded the root cert for the server on the client (the CA that
issued the cert to the server)? On client side, code could look like:
std::string rootCerts;
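A standalone sketch of that loading step (only the plain file reading compiles here; the grpc calls appear as comments, and the file name is an example):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Read the CA bundle (PEM) into a string; returns "" if the file is missing,
// which the caller should treat as a fatal configuration error.
std::string LoadPem(const std::string& path) {
  std::ifstream in(path);
  if (!in) return "";
  std::ostringstream buf;
  buf << in.rdbuf();
  return buf.str();
}

// Intended client-side usage (not compiled here; requires grpc++):
//   grpc::SslCredentialsOptions opts;
//   opts.pem_root_certs = LoadPem("ca.pem");
//   auto channel = grpc::CreateChannel(addr, grpc::SslCredentials(opts));
```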
> time is unsafe. And it is not easy to guarantee that this won't happen if
> grpc is doing it under the covers.
>
> On Mon, Apr 23, 2018 at 10:43 PM, Arpit Baldeva <abald...@gmail.com>
> wrote:
>
>> Grpc does not un-initialize OpenSSL. If you have another thread that i
es your problem.
>
> On Friday, April 20, 2018 at 9:30:34 AM UTC-7, Arpit Baldeva wrote:
>>
>> I am on 1.0.2k so yeah it is a problem on that version.
>>
>> I think the simplest fix is what I mentioned in last email - grpc
>> init_openssl implementation can chec
2018 at 4:18:42 PM UTC-5, Jiangtao Li wrote:
>>> Good to know. Once the patch is approved and merged, it will be in the
>>> next grpc release.
>>>
>>>
>>> Thanks,
>>> Jiangtao
>>>
>>>
>>>> On Mon, Apr 23, 201
Check out the example I added
at https://groups.google.com/forum/#!topic/grpc-io/T9u2TejYVTc
As for 300-400 rpcs, you can write a custom code generator that plugs into
protoc (much like grpc_cpp_plugin) and have it generate additional code
that you may need (like auto "requesting" your
You say micro-services and then say your server will have 300-400 rpcs? Are
they part of the same service or many independent services and you are just
trying to get a common framework together?
I had a similar problem in my application which is/was largely a monolith
and I had to add gRPC
king on finding developer time to
> work on this, but currently do not have an ETA to provide to you.
>
> Justin
>
>
> On Thu, Mar 8, 2018 at 10:43 AM, Arpit Baldeva <abal...@gmail.com
> > wrote:
>
>> Any information on this?
>>
>> Thanks.
>>
Hi,
Looks like C core added cert reload support
(https://github.com/grpc/grpc/pull/12644) but C++ api does not expose the
functionality? Am I missing something here, or is this in the works?
Thanks.
--
You received this message because you are subscribed to the Google Groups
"grpc.io"
Hi,
What does a grpc pre-release mean? For example, what level of testing does
it go through vs a release that is not marked pre-release? Can the api
change between a pre-release and a release?
Thanks.
--
You received this message because you are subscribed to the Google Groups
"grpc.io"
rs don't. Or is that field
> only used when I am setting the SSL options for a client?
>
> Thanks,
> Todd
>
> On Tuesday, April 24, 2018 at 12:29:10 PM UTC-7, Arpit Baldeva wrote:
>>
>> The code you have on server side looks correct to me(I have pretty muc
Feel free to take a look at this thread
- https://groups.google.com/d/topic/grpc-io/T9u2TejYVTc/discussion
I attached a C++ implementation of the RouteGuide Async server there. That
code avoids a lot of boilerplate and integrates nicely with any threading
architecture you want. The code was
can't call a pure virtual from
> a ctor).
>
> With this, HandleRpcs basically contains only 2 lines:
> - cq_->Next(&tag, &ok)
> - static_cast<CallData*>(tag)->Proceed();
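The CallData pattern quoted above can be modeled standalone (grpc calls are replaced with comments so the state machine runs by itself; names follow the greeter async example, and the reply value is invented):

```cpp
#include <string>

// One tag object per in-flight rpc; Proceed() advances its state every time
// the completion queue hands the tag back.
class CallData {
 public:
  enum State { CREATE, PROCESS, FINISH, DONE };

  void Proceed() {
    switch (state_) {
      case CREATE:
        // would call service->RequestSayHello(..., this) here
        state_ = PROCESS;
        break;
      case PROCESS:
        // would spawn a fresh CallData for the next rpc, then call
        // responder.Finish(reply, Status::OK, this)
        reply_ = "hello";
        state_ = FINISH;
        break;
      case FINISH:
        state_ = DONE;  // a real server would `delete this` here
        break;
      case DONE:
        break;
    }
  }

  State state() const { return state_; }
  const std::string& reply() const { return reply_; }

 private:
  State state_ = CREATE;
  std::string reply_;
};
```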
>
> On Monday, October 15, 2018 at 11:05:54 AM UTC+2, Stephan Menzel wrote:
>>
>> Am Freitag,
You should be looking to run the grpc server in async mode. That'd make
sure that there is not a thread per server-streaming rpc (you control the
threading model).
On Sunday, October 21, 2018 at 4:48:24 AM UTC-7, Michael Martin wrote:
>
> Hello,
> I choose grpc to replace a REST data
Reattached -
On Sat, Sep 8, 2018 at 4:57 AM Arthur Wang wrote:
>
> Hi Arpit :
>
> Can't view or download your example code. Is that because it is too old
> now? Where else can I view it?
>
> Thanks a lot.
>
>
> On Thursday, March 23, 2017 at 6