Yes. If you want to abort the stream, just cancel it with ClientContext::TryCancel.
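A minimal sketch of the suggestion above (the function name is mine; the key API is grpc::ClientContext::TryCancel):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: whatever reader/writer object your stub returned, the way to abort
// the in-flight call is to cancel its ClientContext. TryCancel() may be
// called from any thread; subsequent operations on the stream fail, and
// Finish() then reports StatusCode::CANCELLED.
void AbortCall(grpc::ClientContext* context) {
  context->TryCancel();
}
```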
On Sunday, July 12, 2020 at 11:19:16 PM UTC-7 afshi...@gmail.com wrote:
>
> Hi.
>
> Let's assume that I have written an asynchronous grpc client that sends a
> stream and gets a response. This situation uses `ClientAsyncWriter`
This seems like some kind of memory corruption. I suggest you run the
program under a memory tool such as AddressSanitizer.
On Wednesday, May 13, 2020 at 3:04:06 AM UTC-7 deepankar wrote:
> I am using grpc for streaming audio streams. My grpc client runs fine most
> of the time, but
It is not clear which thread you are talking about, or which platform you
are on. gRPC core does set thread names where applicable. For example, the
sync server names its threads "grpcpp_sync_server" in thread_manager.cc.
On Linux, the name is set on the pthread if the GPR_LINUX_PTHREAD_NAME macro
You can turn on some debug tracing with environment variable GRPC_TRACE,
for example setting it to "http,secure_endpoint,tcp". For a complete list,
refer to:
https://github.com/grpc/grpc/blob/master/doc/environment_variables.md
On Saturday, October 26, 2019 at 11:24:07 AM UTC-7
Channels are relatively heavyweight, so it is a good idea not to create a
lot of them (unless you hit a throughput bottleneck). Stubs are pretty
cheap.
Channels and stubs are all thread-safe.
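A sketch of the recommended shape (the Greeter service and generated header are hypothetical stand-ins for your own proto):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

#include "helloworld.grpc.pb.h"  // hypothetical generated header

// Sketch: one shared channel, several cheap stubs. Since both channels and
// stubs are thread-safe, the stubs could also be shared across threads
// directly instead of creating one per thread.
void MakeStubs() {
  std::shared_ptr<grpc::Channel> channel = grpc::CreateChannel(
      "localhost:50051", grpc::InsecureChannelCredentials());
  auto stub_a = helloworld::Greeter::NewStub(channel);
  auto stub_b = helloworld::Greeter::NewStub(channel);
}
```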
On Tue, Nov 12, 2019 at 1:22 PM wrote:
> Sorry, i meant "Is it better to also create
It is safe to call Shutdown more than once.
On Tuesday, August 20, 2019 at 3:18:52 PM UTC-7, Jeff wrote:
>
> If shutdown is called on a server more than once, is the behavior defined?
> Is it safe to do this or is there a chance of a crash?
That is like a C++ virtual call: each handshaker has its own implementation
of the method, and the one you found is one of them.
Not every handshaker is registered/run for every channel. You can run your
client with GRPC_TRACE=handshaker GRPC_VERBOSITY=DEBUG and check the output
log to see what
The grpc completion queue will return every tag you give it. To avoid the
case you describe, you need to make sure you drain the completion queue
before destroying it.
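The drain pattern described above, as a sketch (the function name is mine):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: after Shutdown(), Next() keeps delivering every outstanding tag
// (ok may be false for each) until the queue is empty, and only then
// returns false. Only after that is it safe to destroy the queue.
void DrainAndDestroy(grpc::CompletionQueue* cq) {
  cq->Shutdown();
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    // Release whatever state `tag` refers to; do not start new operations.
  }
  // The completion queue can now be destroyed safely.
}
```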
On Wednesday, July 24, 2019 at 6:43:47 PM UTC-7, zhju...@gmail.com wrote:
>
> Hi,
>
> I'm trying to write an async grpc
You can call grpc::Server::Shutdown with a deadline; after the deadline,
the pending rpcs will be cancelled. In your rpc handlers (if you have
long-lived streaming rpcs), you will need to check
ServerContext::IsCancelled to finish up the rpcs. The sync server will not
finish cleanup before you
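A sketch of that shutdown pattern (the five-second grace period is only an example):

```cpp
#include <chrono>

#include <grpcpp/grpcpp.h>

// Sketch: give in-flight rpcs five seconds to complete, then cancel them.
void StopServer(grpc::Server* server) {
  server->Shutdown(std::chrono::system_clock::now() +
                   std::chrono::seconds(5));
}

// Inside a long-lived streaming handler, poll for cancellation so the
// handler actually returns once the rpc is cancelled:
//   while (!context->IsCancelled()) { /* write the next message */ }
```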
Other than the completion queues, grpc creates some internal threads to
handle background work. It is possible that what you observed is work being
offloaded to the background executor.
On Friday, August 2, 2019 at 11:57:42 PM UTC-7, Arthur Wang wrote:
>
> Hi all:
>
>I know that the
(adding back grpc-io)
The client sends the relative timeout value (
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md) over the
wire to the server. The server receives the value and thus knows the
timeout. The server can then set a timer to fire after the timeout to
notify that the rpc is
I think you can consider them read-only. The buffer is usually not changed
in-place because you do not know whether you are the only owner holding a
ref.
On Wednesday, June 26, 2019 at 9:22:21 AM UTC-7, Mayank Narula wrote:
>
> Hi folks
>
> I am referring to this definition -
> /// 1.
The error message means the server could not parse the proto message. I
would suggest you look into: 1. whether the server uses the same proto
file, with the same definition of the message; 2. what is in the "ipc"
field in the failure case.
On Tuesday, June 18, 2019 at 5:51:40 AM UTC-7, 윤석영 wrote:
>
>
If the server returns an error status, the read/write calls on the client
side will fail and the status can be obtained via Finish.
I do not think there is another way to do it.
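A sketch of that client-side pattern, using a hypothetical server-streaming call GetFeed (the service, messages, and generated header are illustrative):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

#include "feed.grpc.pb.h"  // hypothetical generated header

// Sketch: once the server returns a non-OK status, Read() returns false;
// the status itself is only available from Finish().
grpc::Status ConsumeStream(feed::FeedService::Stub* stub,
                           const feed::FeedRequest& request) {
  grpc::ClientContext ctx;
  std::unique_ptr<grpc::ClientReader<feed::FeedItem>> reader =
      stub->GetFeed(&ctx, request);
  feed::FeedItem item;
  while (reader->Read(&item)) {
    // consume item
  }
  return reader->Finish();  // carries the server's error code and message
}
```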
On Monday, May 13, 2019 at 7:16:47 AM UTC-7, dixi@gmail.com wrote:
>
>
> Hi,
> I have gRPC server with
This is documented in
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md
I do not think there is an API to change it.
On Wednesday, May 1, 2019 at 2:45:22 PM UTC-7, david...@xnor.ai wrote:
>
> [Using C++ bindings.]
>
> I was surprised to learn the hard way today that the gRPC URL
I guess you are asking about the synchronous server API. Unfortunately,
there is no arena or general allocator support in the sync server.
There are a couple of ways to work around this if you really want to use an
arena:
1. You can switch to use an async server, where your application would
control the
Are your client and server using the same proto? Can you collect the client
side trace as well? From the server log it does not seem related to using
ssl.
On Thursday, September 13, 2018 at 5:44:22 PM UTC-7, solomon lifshits wrote:
>
> Thank you for reply!
> No, the error message is empty.
The first thing I would check is whether server_ == nullptr after
BuildAndStart.
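A sketch of that check (the function name is mine):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

// Sketch: BuildAndStart() returns nullptr on startup failure (for example,
// the port is already in use), and calling methods through the null pointer
// then segfaults.
void StartOrBail(grpc::ServerBuilder& builder) {
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  if (server == nullptr) {
    // Log and bail out instead of calling server->Wait().
    return;
  }
  server->Wait();
}
```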
On Mon, Sep 10, 2018 at 3:22 PM Alistair Lowe wrote:
> Hi guys,
>
> I'm trying to implement my own gRPC 1.14.2 C++ async server loosely
> following the hello world examples, however I receive a segfault when
>
It sounds like the problem is similar to #15889 as you mentioned. We should
continue the discussion in the issue.
On Wednesday, July 18, 2018 at 10:47:51 AM UTC-7, banshe...@googlemail.com
wrote:
>
> In principle, yes. I also already compiled a minimal reproducing example.
> However I found
You are right. Currently there is no support for mocking server-side sync
reader/writer objects, and they can only be tested with a real client and
the grpc library.
On Friday, July 6, 2018 at 5:23:36 AM UTC-7, Alex Shaver wrote:
>
> When I have a service that has a simple unary interaction, writing
My understanding is that you register the service with a host string, which
matches a client that uses the corresponding authority.
For example, on the server:
RegisterService("other_service.myapi.com", &service)
and on the client:
client_context.set_authority("other_service.myapi.com");
and then make the rpc.
At
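Both halves of that setup, as a sketch (the host name is illustrative):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: ServerBuilder::RegisterService has an overload taking a host
// string; a service registered this way only matches rpcs whose :authority
// equals that host.
void ConfigureServer(grpc::ServerBuilder& builder, grpc::Service* service) {
  builder.RegisterService("other_service.myapi.com", service);
}

// Client side: route to that registration via the authority header, then
// make the rpc with this context.
void PrepareContext(grpc::ClientContext& ctx) {
  ctx.set_authority("other_service.myapi.com");
}
```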
I am afraid there is no easy way to do this now.
In some tests we explicitly override the definition of "now". This might be
able to achieve what you want, but usually it is used for something lower
level.
See bdp_estimator_test.cc for an example.
On Friday, March 16, 2018 at 12:04:31 AM
Maybe you are oversimplifying the code, but the snippet does not look
correct to me.
1. Your Read call is async, meaning it is more like "start a read and
notify me via the tag when something arrives". So it does not make sense to
log the args when it returns.
2. You request a MdtDialOut but you do not seem
Hi Arpit,
This discussion is not really related to this gRFC now, as it does not seem
to be related to grpc_init/shutdown issue.
If I recall correctly, what you did is correct. Let's say you have a thread
pool driving the ServerCompletionQueue's to handle events for the async
rpc, and in the
I do not think we support that. Actually I think tokens are intentionally
dropped if the channel is not secure to avoid leaking the tokens over the
wire.
On Wed, Feb 7, 2018 at 3:04 PM, wrote:
> I was able to get this working by simply calling AddMetadata on the client
Hi,
We will need some more details to understand what is going on, but I am not
aware of any leaks in stubs or channels.
You can either provide some sort of repro or some debug information showing
that memory has been leaked.
Thanks.
On Tue, Jan 30, 2018 at 10:11 PM, wrote:
Maybe you can use something like ServerBuilderSyncPluginDisabler in
async_end2end_test.cc to remove the plugin for those servers.
On Wednesday, December 13, 2017 at 4:41:32 PM UTC-8, Arpit Baldeva wrote:
>
> Hi,
>
> Currently, when the ServerBuilder::BuildAndStart is called, it
> unilaterally
You can set a deadline on your stream. Re-issuing the rpc is not really a
problem, because a failed read at the client means the rpc is done at the
server or has failed somehow anyway.
Or you can configure client-side channel keepalive by adding channel
arguments: GRPC_ARG_KEEPALIVE_TIME_MS and
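A sketch of setting the keepalive channel argument mentioned above (the 30-second value and target are only examples):

```cpp
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>

// Sketch: enable client-side HTTP/2 keepalive pings so a dead connection
// is detected even when the stream is idle.
std::shared_ptr<grpc::Channel> MakeKeepaliveChannel(
    const std::string& target) {
  grpc::ChannelArguments args;
  args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 30000);  // ping every 30 s
  return grpc::CreateCustomChannel(
      target, grpc::InsecureChannelCredentials(), args);
}
```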
Do you mean you created 9 threads, or that grpc created 9 threads?
I assume you are working on an async grpc server? Are those threads your
rpc-handling threads?
grpc creates some threads internally to offload some work.
On Tuesday, November 28, 2017 at 2:19:28 AM UTC-8, shikhach...@gmail.com
For the first question, try to use 0.0.0.0:8889
at https://github.com/Rhysol/GrpcDemo/blob/master/server/main.cpp#L8
For the second question, your server handles a single rpc at a time. All
the other rpcs are waiting at the server to be picked up. Maybe that is the
reason why you see memory
gRPC core will start some threads to do background work (executor thread)
and handle timers (timer thread).
On Tuesday, November 7, 2017 at 12:38:56 PM UTC-8, Maysam Mehraban wrote:
>
>
> Hello,
>
> I am looking into gRPC in async mode to see if it is suitable for the
> application that I am
Yes. A ServerCompletionQueue is a CompletionQueue and you can use it for
client events as well.
On Wednesday, November 1, 2017 at 2:01:25 AM UTC-7, Ista Ranjan Samanta
wrote:
>
> Hi,
>
> Thanks a lot for providing the accomplish able approach towards it. Yes, I
> am also using C++ gRPC.
>
> I
If your stream is finished, either way should be fine.
To be on the safe side, destroy the reader/writer objects first and
Client/ServerContext afterwards.
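The safe ordering follows naturally from declaration order, as in this sketch (the ChatService bidi call and header are hypothetical):

```cpp
#include <grpcpp/grpcpp.h>

#include "chat.grpc.pb.h"  // hypothetical generated header

// Sketch: declare the context before the reader/writer so that, at scope
// exit, the stream object is destroyed first and the context afterwards
// (locals are destroyed in reverse declaration order).
void RunChat(chat::ChatService::Stub* stub) {
  grpc::ClientContext context;         // destroyed last
  auto stream = stub->Chat(&context);  // destroyed first
  // ... Write/Read on the stream, then stream->Finish() ...
}
```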
On Thursday, September 7, 2017 at 10:59:47 AM UTC-7, yihao yang wrote:
>
> Hi, all,
>
> I have a question about the destruction order:
> 1.
Maybe try defining TSI_OPENSSL_ALPN_SUPPORT=0 and see whether it can fall
back to NPN?
On Monday, July 24, 2017 at 2:23:58 PM UTC-7, micha...@nauto.com wrote:
>
> My go client is set up like this and works correctly so I believe the
> server and ELB are set up correctly:
>
> creds :=
Usually we have a loop calling Next, and when we are sure no more work will
be added to the completion queue, we call the queue's Shutdown method to
shut it down.
That in turn causes Next to return false, which can be used to break out of
the loop.
The ok parameter is
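The loop described above, as a sketch (CallData is a hypothetical per-operation state object whose address was used as the tag):

```cpp
#include <grpcpp/grpcpp.h>

// Hypothetical per-operation state; its address is the tag.
struct CallData {
  void Proceed(bool ok);
};

// Sketch: Next() blocks until an event arrives; it returns false only
// after Shutdown() has been called and the queue is drained, which breaks
// the loop.
void DriveQueue(grpc::CompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    static_cast<CallData*>(tag)->Proceed(ok);
  }
}
```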
There is currently a non-public API under src/cpp/common/ for channel
filters. You can try that out. I do not think its current form is ready to
move to the public headers.
On Tuesday, July 25, 2017 at 4:09:15 PM UTC-7, Tudor Bosman wrote:
>
> Thanks!
>
> How about interceptors?
>
> Thanks,
>
It is not clear to me what you mean by posting an async rpc request to a
cq. However, the user of a cq needs to guarantee that no new events are
added to the cq after Shutdown is called. As a result, you most likely need
some synchronization of your own.
Thanks.
On Thursday, July 27, 2017 at
Can you give an example of the workflow that triggers the problem?
We try to hide the initialization behind the creation of some high level
C++ objects so that users should not need to worry about explicit
initialization.
Thanks.
On Sunday, June 25, 2017 at 10:45:54 PM UTC-7,
"stream" is part of your API definition, while sync/async is about the
implementation.
You should decide whether to use "stream" according to whether you can have
more than one reply message.
As you said, whether to use the async or sync implementation is about your
resource limitations. If you have a
You can run "make fling_client fling_server" and look at fling_test.c to
see how to run them manually.
On Tue, Jun 13, 2017 at 9:02 AM, Rajarshi Biswas wrote:
> Hi Guys,
>
> So I figured I have to make this using bazel ? But I get the following
> error now.
>
> bazel
On the client side, I do not think you will have a way to completely
disable checking. The closest you can get is to use
ChannelArguments::SetSslTargetNameOverride to set the proper name from the
server side cert.
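A sketch of the override mentioned above (the name and address are illustrative):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

// Sketch: the certificate is still fully verified, but against this name
// rather than the dialed host, which helps when you connect by IP address.
std::shared_ptr<grpc::Channel> MakeOverrideChannel() {
  grpc::ChannelArguments args;
  args.SetSslTargetNameOverride("name.in.server.cert");  // from the cert
  return grpc::CreateCustomChannel(
      "10.0.0.5:443",  // illustrative address
      grpc::SslCredentials(grpc::SslCredentialsOptions()), args);
}
```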
On Tue, Jun 6, 2017 at 3:05 PM, wrote:
> Sorry for the delay
On Wed, Apr 26, 2017 at 9:25 AM, Eric Anderson <ej...@google.com> wrote:
> I assumed this is Go, given lis.Accept().
>
> On Tue, Apr 25, 2017 at 5:14 PM, 'Yang Gao' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> What language are you using?
>>
>
What language are you using?
On Tuesday, April 18, 2017 at 4:30:24 PM UTC-7, Steven Jiang wrote:
>
> grpc server is listening on a port and accepting connection request from
> grpc clients. lis.Accept() is called inside grpc lib. If I'd like to set
> TOS bits of the new accepted connection,
Ilina,
You may want to test by setting a channel arg with key
grpc.testing.fixed_reconnect_backoff_ms to a smaller value and see whether
it works, or maybe play with the values in client_channel/subchannel.c.
I do not think we have an arg for the min backoff right now though.
On Tue, Feb 28, 2017
What kind of error are you seeing?
Are there any error logs printed?
On Tuesday, October 18, 2016 at 1:16:27 AM UTC-7, Eugene Abramov wrote:
>
> Hello,
>
> If the file application_default_credentials.json is missing the function
> grpc::GoogleDefaultCredentials fails with a critical error. How
Hi Min,
The zookeeper resolver code was not properly tested and caused us problems
occasionally, and thus it was removed.
If you are using that and have a tested implementation, we would be happy
to take pull requests :)
Thanks.
On Wed, Nov 2, 2016 at 5:24 PM, Min Yao
Hi,
The stub can be shared by different calls, but the memory is freed in
https://github.com/grpc/grpc/blob/master/src/core/lib/surface/channel.c#L311
You can try defining GRPC_STREAM_REFCOUNT_DEBUG to get more logs.
On Tue, Sep 27, 2016 at 5:17 PM, wrote:
> I am tracking
You are talking to port 443 but using insecure credentials (clear text).
Shouldn't you use ssl credentials at least?
On Monday, September 5, 2016 at 12:03:25 AM UTC-7, balad...@gmail.com wrote:
>
> Hello everyone!
>
> I've written some code in attempt to stream mic audio input from a
> separate
You may want to try ServerContext::AsyncNotifyWhenDone.
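A sketch of how AsyncNotifyWhenDone is typically wired up in an async server (the function name and done_tag object are mine):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: request the "done" notification up front; the tag is delivered
// on the completion queue when the rpc finishes for any reason, including
// the client disconnecting. `done_tag` is a tag object of your own.
void WatchForDisconnect(grpc::ServerContext* context, void* done_tag) {
  // Must be requested early, before the rpc completes.
  context->AsyncNotifyWhenDone(done_tag);
  // When done_tag later comes out of the queue:
  //   if (context->IsCancelled()) { /* client went away */ }
}
```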
On Wed, Aug 17, 2016 at 2:51 AM, Chaitanya Gangwar <
chaitanyagang...@gmail.com> wrote:
> Hi,
>
> I have Async streaming server implemented in C++.
> I want to know is there any way to detect on server side when a client
> disconnects so