Apologies for multiple emails. It just occurred to me that the topic of my
post is incorrect. What I am really looking for is the client/peer
connection establishment/closed event (and not really the stream id of rpc
call - I am having a hard time trying to recall why I asked for that in the
first
Hi,
Is there a doc that details the server side connection management? I
see
https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
but it is more applicable on the client side.
I am trying to figure out details like how long does a socket remain active
for an inactiv
Hi,
I am investigating the viability of using gRPC (C++ version) to replace our
existing RPC framework at Electronic Arts and would appreciate it if somebody
could provide some answers so I can make the best judgement. For
most of our existing usage, I have an idea of how to do that via
We've been adding some metric APIs
in the various implementations recently.
I was mostly curious about what was already available. It'd be nice to know
the amount of sent/received bytes per peer.
On Monday, December 5, 2016 at 3:40:41 PM UTC-8, Eric Anderson wrote:
>
> On T
Hi,
Are there any plans to support interceptors in C++? It looks like they are
supported in Go.
Does the server support rate limiting per IP, or is this functionality
expected to be built at the application level? If application level,
interceptors would be handy here.
Thanks
Arpit
Hi,
Resurrecting this old thread.
I am a bit puzzled that this boilerplate code is not auto-generated (or
that there is no option for it) and am wondering what the reason behind it is.
There can be different patterns to generate this boilerplate but at least
one could be provided by default. Is async
You don't need to loop through all the message types. Protobuf generates a
'case' method that tells you the case/type of the message. You should
use a switch statement on that generated method in your code.
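A minimal sketch of the idea above. `Envelope` here is a hand-rolled stand-in
for your generated message class; with real protobuf, the `payload_case()`
accessor and the `PayloadCase` enum are generated for each oneof, so no mock
is needed.

```cpp
#include <string>

// Mock of what protoc would generate for:
//   oneof payload { Ping ping = 1; Chat chat = 2; }
struct Envelope {
  enum PayloadCase { PAYLOAD_NOT_SET = 0, kPing = 1, kChat = 2 };
  PayloadCase payload_case() const { return case_; }
  PayloadCase case_ = PAYLOAD_NOT_SET;
};

// Dispatch on the generated case method instead of probing each type.
std::string Dispatch(const Envelope& msg) {
  switch (msg.payload_case()) {
    case Envelope::kPing: return "ping";
    case Envelope::kChat: return "chat";
    default:              return "unset";  // PAYLOAD_NOT_SET
  }
}
```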
On Monday, February 13, 2017 at 2:30:11 PM UTC-8, jing...@dialpad.com wrote:
>
>
Hi,
This post is around the rpc sequence issued by a single client.
For the sync model, I understand that rpc call order guarantee can't be
maintained due to a pool of threads executing concurrently. The sync model
is not suitable for my use case for other reasons and I was looking at the
asy
order of
> matching (because we get a faster implementation), but preserve the order
> of actual requests presented to the application.
>
> On Wed, Feb 15, 2017, 1:56 PM Arpit Baldeva wrote:
>
>> Hi,
>>
>> This post is around the rpc sequence issued by a sin
Hi,
For async calls (say a Write call - void grpc::ServerAsyncWriter<W>::Write(
const W& msg, void* tag)), do I need to keep the msg around
until I get the tag back from the completion queue?
Thanks.
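I can't speak authoritatively to the lifetime rules, but a conservative
pattern that sidesteps the question is to own the message in the per-call
object until its tag comes back. A sketch (`MyReply` and the CallData layout
are hypothetical; only `ServerAsyncWriter::Write` is the real API):

```cpp
// Conservative sketch: the reply lives in the per-call object, so it is
// guaranteed to outlive the Write() until the tag is delivered by cq->Next().
struct CallData {
  grpc::ServerContext ctx;
  grpc::ServerAsyncWriter<MyReply> writer{&ctx};
  MyReply pending;               // stays alive until the write tag returns
  bool write_in_flight = false;

  void SendReply(MyReply reply) {
    pending = std::move(reply);
    write_in_flight = true;
    writer.Write(pending, /*tag=*/this);  // tag comes back via cq->Next()
  }
  void OnWriteDone(bool /*ok*/) { write_in_flight = false; }
};
```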
--
You received this message because you are subscribed to the Google Groups
"grpc.io" group.
Hi,
For bi-directional streaming, can I have both pending read and write
operations at the same time (1 of each)?
All the examples I found so far seem to do ping-pong behavior (1 read
followed by 1 write) but I don't recall reading any documentation that
mentions this constraint on protocol le
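For what it's worth, gRPC permits at most one outstanding Read and one
outstanding Write per stream, but one of each may be pending simultaneously
if they use distinct tags. A sketch (Req/Resp and the tag struct are
hypothetical names; the stream API is real):

```cpp
// Sketch: one pending read AND one pending write on the same bidi stream.
struct BidiCall {
  grpc::ServerContext ctx;
  grpc::ServerAsyncReaderWriter<Resp, Req> stream{&ctx};
  Req incoming;
  Resp outgoing;
  // Distinct tags let cq->Next() tell the two completions apart.
  struct Tag { BidiCall* call; bool is_read; };
  Tag readTag{this, true};
  Tag writeTag{this, false};

  void Start() {
    stream.Read(&incoming, &readTag);   // pending read...
    stream.Write(outgoing, &writeTag);  // ...and a pending write, together
  }
};
```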
r any gRPC
> streaming operations.
>
> On Wed, Feb 22, 2017 at 4:45 PM Arpit Baldeva wrote:
>
>> Hi,
>>
>> For bi-directional streaming, can I have both pending read and write
>> operations at the same time (1 of each)?
>>
>> All the examples I found so
With the synchronous API, integrators interested in having a little more
control over the behavior of the thread pool (for example, assigning a
thread to a particular core) can define
GRPC_CUSTOM_DEFAULT_THREAD_POOL and provide the implementation of the
thread pool interface (which
For Go, this thread claimed that something like this would be available in
Q1 2017 - https://groups.google.com/forum/#!topic/grpc-io/C0rAhtCUhSs
Regardless of the language (I am working with C++ impl), my plan to
implement this functionality is also RPC based (streaming Ping rpc). On
server si
me folks looking to update the example code also...
> I'm going to have them jump on this thread for where to go with the code.
>
> On Thursday, March 2, 2017 at 11:34:04 AM UTC-8, Arpit Baldeva wrote:
>>
>> Hi,
>>
>>
>>
>> Recently, I have been look
Here is a more complex example :)
- https://groups.google.com/forum/#!topic/grpc-io/DuBDpK96B14
It implements the server side, but the concepts remain much the same.
On Wednesday, April 26, 2017 at 11:16:31 PM UTC-7, Anirudh Kasturi wrote:
>
> Thanks Kuldeep !
>
> On Apr 26, 2017 11:14 PM, "Kuldeep
Hi,
Is there any recommended process to get the PR approved and merged? I
submitted a PR a while back (https://github.com/grpc/grpc/pull/10919 ) and
it is still waiting for a reviewer and CLA verification. I am also unsure
how to select/assign a person as the reviewer.
Thanks.
a pointer to the PR.
>
>
>
> On Thu, May 11, 2017 at 11:09 AM, Arpit Baldeva wrote:
>
>> Hi,
>>
>> Is there any recommended process to get the PR approved and merged? I
>> submitted a PR a while back (https://github.com/grpc/grpc/pull/10919 )
>>
he CLA hadn't been verified. Will get back
>> to you with review comments ASAP.
>>
>> On Thu, May 11, 2017 at 7:12 PM, Varun Talwar wrote:
>>
>>> Assigned to Lisa who can help review and merge the docs.
>>>
>>>
>>> On Thu, M
Hi,
Reference: https://github.com/grpc/grpc/issues/10132
I'd like to contribute the changes for this issue. My plan is to change the
"Raw" string to "GrpcRaw". Is there a particular reviewer who can be added
to the issue?
Thanks.
Hi,
Is there a release schedule page that I can follow for upcoming release
dates? I am interested in knowing the next release due date (tentative is
fine).
Thanks.
This is for gRPC C++.
On Thursday, July 20, 2017 at 1:31:01 PM UTC-7, Arpit Baldeva wrote:
>
> Hi,
>
> Is there a release schedule page that I can follow for upcoming release
> dates? I am interested in knowing the next release due date (tentative is
> fine).
>
> Tha
You can add as many services as you like to a single server (using the
RegisterService call).
Check out the following post for a better (but more complicated) sample:
https://groups.google.com/forum/#!topic/grpc-io/DuBDpK96B14 . It does not
show multiple services, but that part is not complicated.
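The multiple-services part can be sketched as follows (Foo/Bar are
hypothetical generated AsyncService classes; the header path is the modern
one, older releases use grpc++/grpc++.h):

```cpp
#include <grpcpp/grpcpp.h>

void RunServer() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());
  Foo::AsyncService foo;          // generated AsyncService classes
  Bar::AsyncService bar;
  builder.RegisterService(&foo);  // register as many services as needed
  builder.RegisterService(&bar);
  auto cq = builder.AddCompletionQueue();
  auto server = builder.BuildAndStart();
  server->Wait();                 // services stay alive while serving
}
```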
On We
t way to do that is
> something similar to what you have, where every Service type implements its
> own CallData/TagInfo class to do the processing it needs to do for each
> function. Does all this sound okay?
>
>
> On Thursday, July 20, 2017 at 1:43:13 PM UTC-
> On Friday, 21 July 2017 11:19:30 UTC-7, Carl Mastrangelo wrote:
>>
>> +David
>>
>>
>> On Thursday, July 20, 2017 at 1:32:10 PM UTC-7, Arpit Baldeva wrote:
>>>
>>> This is for gRPC C++.
>>>
>>> On Thursday, July 20, 2017 at 1:31:01 PM U
Hi,
The server side gRFC
-
https://github.com/ejona86/proposal/blob/a339b01be9eafffb1adc4db8c782469caed18bdc/A9-server-side-conn-mgt.md
- does not mention that both connection age and connection idle need to be
specified together. So shouldn't the following code use an || instead of &&?
stat
Hi,
Using version 1.4.2.
I see GRPC_ARG_MAX_CONCURRENT_STREAMS option to limit concurrent streams
per connection. Is there an option available(I did not find any) or planned
to be introduced in the future that would allow for setting max connections
on server. Any new incoming connection reque
Hi,
In the async model, from my experiments, when the server shuts down, 'ok' will
be false. Is there any other scenario in which it can be false?
When the server is shutting down, I don't want to queue up another rpc
request. But if the 'ok' boolean can be false in some other scenario (is
there a con
s the difference between the 'ok' tag and the return value of Next().
>
> (Other that we've observed ok to be false as described above, whereas
> we've never observed the result of Next() to be false
> unless the server was actually shutting down).
>
> Best,
>
possible for to have a false
value for the 'ok' bool. For example, when client finishes streaming
request, the read rpc tag's 'ok' bool will be false (and other scenarios
for other tags when client timed out or cancelled).
On Fri, Aug 18, 2017 at 3:09 PM, Arpit Ba
error this reading routine
>>> will get an error.
>>> We're calling this mechanism a point-to-point healthcheck. The
>>> client-side work is done and the server-side is underway.
>>>
>>>
>>> On Friday, March 3, 2017 at 9:21:41 AM UTC-8, Ar
.
> Even gRPC can't really be sure when a connection is gone (we can guess).
> Like I suggested before, why not send a "Going away" message just before
> disconnect? This is the same solution that HTTP/2 uses under the hood
> (called "go away").
>
> On W
io would listening
>> on context.Done() not work?
>>
>> As of now I'm happy to listen on context.Done() and implement a heartbeat
>> RPC, but am curious to know when context.Done() will fail.
>>
>>
>> On Thursday, August 31, 2017 at 2:59:10 AM UTC+5, Arpi
Hi,
After reading
https://github.com/ejona86/proposal/blob/a339b01be9eafffb1adc4db8c782469caed18bdc/A9-server-side-conn-mgt.md
, I am looking for a small clarification.
It looks like the connections are not considered idle if they have
outstanding rpcs. That would mean it includes server str
I work in C++ but I think the strategy we have can be adopted in any
language.
We use gRPC status codes only for a very limited set of system-level
failures. This way, the mapping from our application logic to grpc status
codes is limited. An incentive for doing this is streaming rpcs, where we
PC* is successful, even if the application request was not).
>
> Thanks!
>
> Evan
>
>
>
> On Thursday, September 14, 2017 at 12:52:28 PM UTC-4, Arpit Baldeva wrote:
>>
>> I work in C++ but I think the strategy we have can be adopted in any
>> language.
>&
Hi,
I see an occasional crash when shutting down my server (version 1.4.2, VS
2015, Windows).
My set up: I use async api and have 2 threads.
1. Thread 1 processes the tags from the completion queue. It also executes
the shutdown request.
2. Thread 2 plucks the tags from the completion queue and qu
pposed
to be wrong.
Thanks.
On Wednesday, September 20, 2017 at 2:49:53 PM UTC-7, Yang Gao wrote:
>
> Did you destruct the mServer and mCQ before destroying the rpc? If you
> keep either one living after the rpc's are all destroyed, does it still
> crash?
>
>
>
>
is a valid message that got read. If not, you know that there are
> certainly no more messages that can ever be read from this stream.
>
> Client-side Finish: ok should always be true
>
> Server-side AsyncNotifyWhenDone: ok should always be true
>
> HTH!
>
> - Vijay
&
Hi,
Currently, I set the desired configuration of the server before starting it up:
mServerBuilder.AddChannelArgument(GRPC_ARG_MAX_RECEIVE_MESSAGE_LENGTH,
maxIncomingMessageSize);
I did not see an obvious way to modify these arguments after the server has
started and is running for some time. Is
Hi,
Version - 1.4.2 - Windows - C++.
I noticed some oddities in the server log and want to ensure that what I am
seeing is intended and not a bug.
I have multiple completion queues in my server and a thread is dedicated to
block on each completion queue separately. I am seeing debug output from
Hi,
Tested on 1.4.2.
Currently, grpc has 3 logging levels.
GPR_LOG_SEVERITY_DEBUG,
GPR_LOG_SEVERITY_INFO,
GPR_LOG_SEVERITY_ERROR
IMHO, GPR_LOG_SEVERITY_INFO currently logs too much and is unsuitable for
use in a prod scenario. INFO to me means that something interesting happened,
which is
hat much won't come out on INFO.
> - Vijay
>
> On Wed, Oct 4, 2017 at 3:52 PM Arpit Baldeva wrote:
>
>> Hi,
>>
>> Tested on 1.4.2.
>>
>> Currently, grpc has 3 logging levels.
>>
>> GPR_LOG_SEVERITY_DEBUG,
>> GPR_LOG_SEVERITY_INF
> On Thursday, October 5, 2017 at 9:29:36 AM UTC-7, Arpit Baldeva wrote:
>>
>> Yes, I do turn on the trace for api (and some other categories) and set
>> the verbosity level to INFO.
>>
>> I thought of the various GRPC_TRACE variables as simply the categories I
&g
Hi,
Any ETA?
Thanks.
Check
out
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/grpc-io/k6QFNxWDmv0/AVpTJ3LMEAAJ
to see how we deal with this particular problem (surfacing a bi-directional
streaming error without breaking the stream).
On Monday, October 30, 2017 at 9:44:02 AM UTC-7, matt.m.
Hi,
Does grpc offer any guidelines around the compatibility between grpc and
protobuf? Looks like recently when the 1.6 version bumped the protobuf
support to 3.4.0, it started using a new php namespace option from it
without putting in a version guard. This means that a prior version of
proto
related to C++? (per the subject)
> Also, any clarification on this in a bug on github would be super useful
> too.
>
> Thanks!
>
>
> On Monday, October 30, 2017 at 2:45:23 PM UTC-7, Arpit Baldeva wrote:
>>
>> Hi,
>>
>> Does grpc offer any guidelines aroun
Hi,
Currently, it looks like rpc urls are generated in the form of
/<service name>/<method name>. Is there a way to prefix the urls such that a
grpc client could call /<prefix>/<service name>/<method name>? The receiver
can chop off the prefix and call the correct rpc.
Use case: We'd like to have a proxy sitting between our client and grpc
server. The proxy wi
es not like alternative paths. It is possible,
> just a pain.
>
> On Wed, Nov 1, 2017 at 3:01 PM, Arpit Baldeva wrote:
>
>> Hi,
>>
>> Currently, looks like rpc urls are generated in the form of /<service
name>/<method name>. Is there a way to prefix the urls such that grpc client
&
We use google.rpc.Status for such use cases (each response contains one) -
https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto.
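The shape this takes in practice, sketched below (GetUserResponse/User are
illustrative names; google/rpc/status.proto ships with the googleapis repo
linked above):

```proto
syntax = "proto3";
import "google/rpc/status.proto";

message User {
  string name = 1;
}

message GetUserResponse {
  google.rpc.Status status = 1;  // application-level outcome
  User user = 2;                 // payload, meaningful on success
}
```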
On Tuesday, December 12, 2017 at 8:24:15 AM UTC-8, pavol.o...@gmail.com
wrote:
>
> Hello,
>
> Two of our teams independently defined several R
Hi,
Is it a known issue that grpc_cli does not support enums?
I have tried a few variations:
* grpc_cli type 127.0.0.1:50051 Greeter.GreetingType
* grpc_cli type 127.0.0.1:50051 .Greeter.GreetingType
etc. My finding is also consistent
with
https://www.goheroe.org/2017/08/19/grpc-service-discove
Hi,
Currently, when the ServerBuilder::BuildAndStart is called, it unilaterally
goes ahead and binds all the plugins. In server_builder.cc,
for (auto plugin = plugins_.begin(); plugin != plugins_.end(); plugin++) {
(*plugin)->InitServer(initializer);
}
et al.
Is it possible to add an
Hi,
I have looked at the gRFC and am not too sure how it fixes some of the current
issues with library shutdown (you referenced the issues I mentioned on
GitHub/private conversations).
First off, I don't know about other users but I was already forced to use
grpc_init/shutdown in my code (so I a
ing the ServerContext (rather than just shutdown the server).
> If that is not the case, then I think my previous thought was not right.
>
>
>
> On Wed, Dec 20, 2017 at 11:33 AM, Arpit Baldeva wrote:
>
>> Hi,
>>
>> I have looked at the gRFC and not too sur
ne which client shall handle
> this event and send message to that specific client.
>
> I don't seem to find these details from Context ... any help would be great
>
>
> On Tuesday, September 12, 2017 at 12:23:48 PM UTC-7, Arpit Baldeva wrote:
>>
>> Hi,
>>
> With the information available from context.Context() I was not able to
> distinguish different clients from the different/same remote.
>
> On Sat, Jan 6, 2018 at 1:42 PM Arpit Baldeva wrote:
>
>> My question was mainly around how to make sure the client network
>>
Client IP is not a reliable way to keep track of clients. For example, you
could have many clients sitting behind a proxy, and they will share the ip
address. I only use it for logging purposes. So it depends on your use case.
On Mon, Jan 8, 2018 at 4:58 PM, 'Menghan Li' via grpc.io <
grpc-io@googlegrou
Hi,
Any feedback on this?
Thanks.
On Thursday, January 4, 2018 at 10:05:56 AM UTC-8, Arpit Baldeva wrote:
>
> Hi Yang,
>
> Sorry for the delayed reply. I was on vacation.
>
> Let me restate the problem very simply - Currently, I can't call
> grpc::Server::Shutdown
Hi,
Any idea what the above field is used for? For a server, its own ssl cert,
presented to any client making a request to it, is specified via
pem_key_cert_pairs.
And SslServerCredentialsOptions should not be used when your application is
acting as a client; grpc::SslCredentialsOptions should be used instead (which h
What you need to use is the GRPC_ARG_MAX_CONNECTION_IDLE_MS option.
However, that option is currently buggy. Before 1.9.0, it could cause a
crash, and starting with 1.9.0, it can cause a memory leak if enabled. See this
issue - https://github.com/grpc/grpc/pull/13594
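For reference, these connection-management knobs from the A9 gRFC are all set
as channel arguments before BuildAndStart (values illustrative, and subject
to the bug noted above):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: server-side connection management channel arguments.
void ConfigureConnectionManagement(grpc::ServerBuilder& builder) {
  builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_IDLE_MS, 5 * 60 * 1000);
  builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_AGE_MS, 30 * 60 * 1000);
  builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_AGE_GRACE_MS, 60 * 1000);
}
```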
On Saturday, February 10, 2018 a
Hi,
What does a grpc pre-release mean? For example, what level of testing does
it go through vs a release that is not marked pre-release? Can the API
change between a pre-release and a release?
Thanks.
Hi,
Looks like C core added cert reload support
(https://github.com/grpc/grpc/pull/12644), but the C++ API does not expose the
functionality? Am I missing something here, or is this in the works?
Thanks.
Any information on this?
Thanks.
On Thursday, March 1, 2018 at 3:41:58 PM UTC-8, Arpit Baldeva wrote:
>
> Hi,
>
> Looks like C core added cert reload support (
> https://github.com/grpc/grpc/pull/12644) but C++ api does not expose the
> functionality? Am I missing something
You say micro-services and then say your server will have 300-400 rpcs? Are
they part of the same service, or many independent services where you are just
trying to put a common framework together?
I had a similar problem in my application which is/was largely a monolith
and I had to add gRPC supp
#x27;m working on finding developer time to
> work on this, but currently do not have an ETA to provide to you.
>
> Justin
>
>
> On Thu, Mar 8, 2018 at 10:43 AM, Arpit Baldeva wrote:
>
>> Any information on this?
>>
>> Thanks.
>>
>>
Check out the example I added
at https://groups.google.com/forum/#!topic/grpc-io/T9u2TejYVTc
As for the 300-400 rpcs, you can write a custom code generator that plugs into
protoc (much like grpc_cpp_plugin) and have it generate the additional code
that you may need (like auto "requesting" your server
Hi,
I recently pinned down a sporadic race condition in my application due to
grpc initializing OpenSSL internally. The problem is that OpenSSL has some
global callbacks that grpc tries to initialize on its own, without the
authorization of the application. The problem is in the
init_openss
Arpit,
>
> grpc_init initialized OpenSSL for a short period (~2 days) and the code
> was later removed. Do you still see the problem if you fetch the latest master?
>
> On Monday, April 16, 2018 at 2:22:32 PM UTC-7, Arpit Baldeva wrote:
>>
>> Hi,
>>
>> I recentl
ril 18, 2018 at 2:46:28 PM UTC-7, Arpit Baldeva wrote:
>>
>> I am using grpc-1.10.0 and it has that code. Looking at the latest
>> master, it still has that code -
>> https://github.com/grpc/grpc/blob/master/src/core/tsi/ssl_transport_security.cc
>>
>> - see the
in your application to make sure
> SSL init is not called simultaneously.
>
> On Wed, Apr 18, 2018 at 3:28 PM Arpit Baldeva wrote:
>
>>
>> Yes, there are two parallel threads that do this at the same time. What I
>> was noticing is that at application shutdown
looks like the clean up calls are becoming
no-ops in the future -
https://stackoverflow.com/questions/35802643/will-ignoring-to-call-openssl-evp-cleanup-result-in-serious-flaws-or-memory-leak
) .
On Wednesday, April 18, 2018 at 4:09:01 PM UTC-7, Arpit Baldeva wrote:
>
> Again, I am not sure
ch problems on OpenSSL init.
>
> For OpenSSL 1.0x, it is a valid concern. Let me check what is the best way
> to resolve this issue (pass a compiler flag, environment variable, or some
> API changes).
>
> Thanks,
> Jiangtao
>
>
> On Wed, Apr 18, 2018 at 7:34 PM Arpit
problem.
>
> On Friday, April 20, 2018 at 9:30:34 AM UTC-7, Arpit Baldeva wrote:
>>
>> I am on 1.0.2k so yeah it is a problem on that version.
>>
>> I think the simplest fix is what I mentioned in last email - grpc
>> init_openssl implementation can chec
>>> Good to know. Once the patch approved and merged. It will be in next grpc
>>> release.
>>>
>>>
>>> Thanks,
>>> Jiangtao
>>>
>>>
>>>> On Mon, Apr 23, 2018 at 2:10 PM Arpit Baldeva wrote:
>>>> Thank
The code you have on the server side looks correct to me (I have pretty much
the same code).
Have you loaded the server's root cert on the client (the CA that
issued the cert to the server)? On the client side, the code could look like:
std::string rootCerts;
readSSLFil
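A fuller sketch of the client side the reply describes (readFileToString and
the target address are hypothetical; the credentials API is real). The key
point is that pem_root_certs must contain the CA that signed the server's
certificate:

```cpp
#include <grpcpp/grpcpp.h>
#include <string>

// Sketch: client channel trusting the CA that issued the server's cert.
std::shared_ptr<grpc::Channel> MakeSecureChannel() {
  grpc::SslCredentialsOptions sslOpts;
  sslOpts.pem_root_certs = readFileToString("ca.pem");  // CA bundle
  auto creds = grpc::SslCredentials(sslOpts);
  return grpc::CreateChannel("myserver.example.com:50051", creds);
}
```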
safe. And it is not easy to guarantee that this won't happen if
> grpc is doing it under the covers.
>
> On Mon, Apr 23, 2018 at 10:43 PM, Arpit Baldeva wrote:
>
>> Grpc does not un-initialize OpenSSL. If you have another thread that is
>> un-initializing it, you can easily
typically aren't threadsafe.
>
>
> > Still as the calls are idempotent, this thread synchronization is an
> easy problem.
>
> If you know every detail of every call -- yes, you can add synchronization
> in all spots. Problem is that when you use 3rdparty lib that uses
thing, but then others don't. Or is that field
> only used when I am setting the SSL options for a client?
>
> Thanks,
> Todd
>
> On Tuesday, April 24, 2018 at 12:29:10 PM UTC-7, Arpit Baldeva wrote:
>>
>> The code you have on server side looks correct to me(
There are many scenarios where an application would want custom error
codes, even if just for logging/better visibility. The pattern I use:
1. Stick a google::rpc::Status in every response message.
2. Create a Message with embedded enums next to the Service definition in
the proto file. Each e
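Step 2 above might look like this sketch (all names illustrative); the enum
value is what gets carried in google.rpc.Status.code:

```proto
// Illustrative only: co-locate application error enums with the service.
message AuthErrors {
  enum Code {
    OK = 0;
    INVALID_TOKEN = 1;
    ACCOUNT_BANNED = 2;
  }
}
```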
Hi,
Previous discussion here
-
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/grpc-io/qOJaIoIzAu0/jdN9VYFLAAAJ
It seems like there is a discrepancy in the way grpc shutdown API works
between Java and C++. From Java docs,
shutdown()
Initiates an orderly shutdown in
Hi,
Can you comment on the status of the C/C++ port? I searched the forum
for recent conversations, but nothing useful turned up.
Thanks.
On Monday, February 5, 2018 at 2:32:25 PM UTC-8, Carl Mastrangelo wrote:
>
> This has been merged, and is being implemented actively in Java and Go,
> a
Hi,
Based on my previous knowledge and reference, the AsyncNotifyWhenDone tag
added by ServerContext::AsyncNotifyWhenDone should be received by the
application when Server::Shutdown is called. *This is supposed to be the
case for the rpcs that have been queued up for processing but not started
Okay, looks like this bug is officially open
at https://github.com/grpc/grpc/issues/10136 . It has been open for a
while. Are there any plans on fixing this?
On Wednesday, August 1, 2018 at 11:48:59 AM UTC-7, Arpit Baldeva wrote:
>
> Hi,
>
> Based on my previous knowledge and re
Reattached -
On Sat, Sep 8, 2018 at 4:57 AM Arthur Wang wrote:
>
> Hi Arpit :
>
> Can't view or download your example code. Is that because it is from too
> long ago? Where else can I view it now?
>
> Thanks a lot.
>
>
> On Thursday, March 23, 2017
Feel free to take a look at this thread
- https://groups.google.com/d/topic/grpc-io/T9u2TejYVTc/discussion
I attached a C++ implementation of the RouteGuide async server there. That
code avoids a lot of boilerplate and integrates nicely with any threading
architecture you want. The code was wri
or(because we can't call a pure virtual from
> a ctor).
>
> With this, HandleRpcs basically contains only 2 lines:
> - cq_->Next(&tag, &ok)
> - static_cast(tag)->Proceed();
>
> On Monday, October 15, 2018 at 11:05:54 AM UTC+2, Stephan Menzel wrote:
&
You should be looking to run the grpc server in async mode. That would make
sure that there is not a thread per server-streaming rpc (you control the
threading model).
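The async threading model amounts to running a small, fixed pool of threads
over the completion queue, so streaming rpcs do not each consume a dedicated
thread. A sketch (CallData/Proceed are hypothetical names from the usual
async-server examples; only the completion-queue API is real):

```cpp
// Sketch: run one of these per worker thread, over as many completion
// queues as you choose - that is the whole threading model.
void DrainQueue(grpc::ServerCompletionQueue* cq) {
  void* tag = nullptr;
  bool ok = false;
  while (cq->Next(&tag, &ok)) {
    static_cast<CallData*>(tag)->Proceed(ok);  // advance the call's state
  }
}
```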
On Sunday, October 21, 2018 at 4:48:24 AM UTC-7, Michael Martin wrote:
>
> Hello,
> I choose grpc to replace a REST data interfa