[grpc-io] Re: Communication of grpc streaming between C# and C++

2021-01-20 Thread 'AJ Heller' via grpc.io
Hi Zijian. Basic streaming communications should work fine between 
languages. Can you be more specific about your errors, or produce a minimal 
reproducible example?
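
For reference, cross-language interop only requires that both sides are 
generated from the same .proto file. A minimal sketch of a C++ synchronous 
client consuming a server-side stream (the service, method, and message names 
below are illustrative, not taken from your project):

```
// Sketch: C++ client reading a server-streaming RPC; the server's language
// doesn't matter. images::ImageService / StreamImages are hypothetical names.
#include <memory>
#include <grpcpp/grpcpp.h>
#include "image_service.grpc.pb.h"  // generated from the shared .proto

int main() {
  auto channel = grpc::CreateChannel("localhost:50051",
                                     grpc::InsecureChannelCredentials());
  auto stub = images::ImageService::NewStub(channel);

  grpc::ClientContext context;
  images::StreamRequest request;
  std::unique_ptr<grpc::ClientReader<images::ImageChunk>> reader(
      stub->StreamImages(&context, request));

  images::ImageChunk chunk;
  while (reader->Read(&chunk)) {
    // process each streamed chunk...
  }
  return reader->Finish().ok() ? 0 : 1;
}
```

One common cross-language gotcha: if your C# server is ASP.NET Core (Kestrel), 
it must be explicitly configured to serve HTTP/2 without TLS when the C++ 
client connects with insecure credentials.
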
On Thursday, January 14, 2021 at 5:59:36 AM UTC-8 Zijian Han wrote:

> Hello, 
>
> I'm developing a grpc service, with image streaming, server in c# and 
> client in c++,
>
> Can I use the gRPC API to communicate on both sides directly?
>
> Because when I write the async streaming server in C# and the async client 
> in C++, I find that they cannot connect.
>
> Is there any solution to this?
>
> BR
>
> Zijian
>



[grpc-io] gRFC L82: gRPC Core EventEngine API

2021-06-21 Thread 'AJ Heller' via grpc.io
Please review and comment! The gRFC is 
at https://github.com/grpc/proposal/pull/245. 

This work replaces gRPC Core's iomgr with a public interface for custom, 
pluggable implementations which we're calling EventEngines. EventEngines 
are tasked with providing all cross-platform I/O, task execution, and DNS 
resolution functionality for gRPC Core and its wrapped languages. This 
public API will make it easier to integrate gRPC into external event loops, 
it will eventually allow siloing events between gRPC channels and servers, 
and it will provide another way to support the C++ Callback API.
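
To give a rough sense of the surface area, here is an illustrative sketch of 
the kind of interface the gRFC describes (names and signatures here are 
paraphrased approximations, not the authoritative API; see the gRFC for the 
real definitions):

```
// Illustrative paraphrase only; the authoritative interface is in the gRFC
// (grpc/proposal#245).
#include <chrono>
#include <functional>

class EventEngine {
 public:
  virtual ~EventEngine() = default;
  // Task execution: run a closure now, or after a delay (timers).
  virtual void Run(std::function<void()> closure) = 0;
  virtual void RunAfter(std::chrono::milliseconds delay,
                        std::function<void()> closure) = 0;
  // Also part of the interface, elided here: creating client connections
  // and server listeners (I/O), and asynchronous DNS resolution.
};
```

An application or wrapped language would hand gRPC an implementation of this 
interface, and gRPC would drive all of its I/O, timers, and DNS lookups 
through it.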

Cheers,
-aj



[grpc-io] Re: How to unit test grpc services api in ci/cd

2021-05-05 Thread 'AJ Heller' via grpc.io
Hi Sunandan. For unit tests, it's first worth trying to test your business 
logic in isolation from gRPC. For integration tests, you could run your 
service and exercise it with test clients (your CI environment may be 
opinionated on how to automate that). We also have a doc that offers one 
suggestion as to how you might test client 
logic https://github.com/grpc/grpc/blob/master/doc/unit_testing.md
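
As a tiny illustration of the first point, one common pattern keeps the 
business logic in a plain function that catch2 can exercise directly, with the 
gRPC service as a thin adapter (all names below are invented for the example):

```
// Sketch: gRPC-free business logic plus a thin service adapter.
// my::MyService and the Echo* messages are hypothetical.
#include <string>
#include <grpcpp/grpcpp.h>
#include "my_service.grpc.pb.h"  // hypothetical generated header

// Pure business logic: unit-testable with catch2 without any gRPC machinery.
std::string Normalize(const std::string& input) {
  return input.empty() ? "<empty>" : input;
}

class MyServiceImpl final : public my::MyService::Service {
  grpc::Status Echo(grpc::ServerContext*, const my::EchoRequest* req,
                    my::EchoReply* reply) override {
    reply->set_message(Normalize(req->message()));  // delegate and return
    return grpc::Status::OK;
  }
};

// A catch2 test then targets Normalize() directly, no server required:
// TEST_CASE("Normalize flags empty input") {
//   REQUIRE(Normalize("") == "<empty>");
// }
```
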
On Sunday, May 2, 2021 at 9:10:10 AM UTC-7 sunanda...@gmail.com wrote:

> Hi,
> I am using unit test framework like catch2. Can any one suggest how to 
> test my grpc service APIs.
> Regards,
> Sunandan 
>
>



[grpc-io] Re: gRFC L79: C++ API changes on ByteBuffer and Slice

2021-04-19 Thread 'AJ Heller' via grpc.io
Corrected link: https://github.com/grpc/proposal/pull/232.  (previous link 
pointed to #215)
On Monday, April 19, 2021 at 4:24:39 PM UTC-7 veb...@google.com wrote:

> Hi all,
>
> I've created a gRFC for C++ API changes on ByteBuffer and Slice. The 
> proposal is here:
> https://github.com/grpc/proposal/pull/232 
> 
>
> Please feel free to comment.
>
> Regards,
> Esun.
>



Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-13 Thread 'AJ Heller' via grpc.io
Absolutely, I'll reach out when the EventEngine integration is a bit more 
feasible. As Mark said, getting to a point where gRPC is ready for custom 
EventEngine integration is maybe 6 months away.

The EventEngine API is public, you can review and comment on it 
here: https://github.com/grpc/proposal/pull/245. It's an experimental API 
that is still undergoing changes, since we're building our first 
implementations and working out performance problems, etc. We're doing all 
of this development in the open, so there's no internal channel or early 
release to offer you, but hopefully this is a good start.
On Monday, September 13, 2021 at 8:12:54 AM UTC-7 Mark D. Roth wrote:

> (Adding AJ, who's driving the EventEngine effort.)
>
> AJ, it looks like Sureshbabu wants to be an early tester of the new 
> EventEngine code on Windows.  Please coordinate with him when we get to a 
> point where the new code is actually ready for testing (specifically the 
> client-side endpoint code).
>
> On Sun, Sep 12, 2021 at 9:40 PM Sureshbabu Seshadri  
> wrote:
>
>> Thanks Mark for your support. We may now be forced to wait for the redesign 
>> task you have mentioned. BTW, is there a way to get that offline (though it's 
>> not officially pushed into the GRPC GitHub) to check whether it helps our 
>> scenario?
>>
>> On Wednesday, September 8, 2021 at 10:41:15 PM UTC+5:30 Mark D. Roth 
>> wrote:
>>
>>> It sounds like this is a Windows-specific problem, which unfortunately 
>>> means that we probably can't help you much in the short term, since we 
>>> don't have any spare cycles to focus on Windows-specific performance.
>>>
>>> As I mentioned earlier, the Windows-specific TCP code in gRPC will be 
>>> replaced by the new EventEngine implementation, probably within the next 6 
>>> months or so, so if the problem is in our current Windows TCP code, then 
>>> that might fix it, although you'd have to wait for that change and then 
>>> test it to see if it helps.
>>>
>>> Of course, gRPC is open-source, so you're welcome to take a look at the 
>>> code and try to fix this yourself.  We'd be happy to accept patches.
>>>
>>> One other experiment that you might try is to increase the payload size 
>>> of each RPC such that the requests are larger than the TCP MSS.  That would 
>>> force the server to send a TCP ACK immediately, rather than potentially 
>>> delaying.  But I don't know if this would help, since you already said that 
>>> you aren't seeing this problem with the legacy CORBA code (although I don't 
>>> know what the wire protocol looks like for that, so maybe it's larger?).
>>>
>>> I'm sorry that we can't be of more immediate help here.  Good luck!
>>>
>>>
>>> On Wed, Sep 8, 2021 at 2:36 AM Sureshbabu Seshadri  
>>> wrote:
>>>
 Thanks Mark for some more details. 

 Our target environment is a *Windows client and Linux server*, and hence we 
 executed the samples in a similar fashion. Now, as per your request, we 
 executed the same sample with a Linux client and the performance is very 
 good: 1000 RPCs finish *within 1 second (about 500 ms)*. Here is the 
 requested log with TCP tracing enabled:


 https://drive.google.com/file/d/1BmgDip5zPUHAiI9VUCrpXPhfe7OXWNnc/view?usp=sharing

 *Regarding parallelizing*, our software is 20 years old and we have 
 just changed the IPC layer from CORBA to GRPC and observed this slowness. 
 Parallelizing is also not possible, as there are requests to be processed on 
 the client end before executing the next RPC call. In some use cases, 200-300 
 different RPCs are executed, and in other use cases a few RPCs are repeatedly 
 called, eventually ending in slowness, as each RPC call has significant 
 degradation compared to our old SW (CORBA).

 One additional point that might be interesting to you: even on the Windows 
 client, when I am executing the sample application calling 1000 RPCs, 
 sometimes the performance is good, say less than 2 seconds, but the majority 
 of executions end up slow, about 8-9 seconds.

 *An additional question about the Windows grpc library build procedure*: 
 we are using 64-bit Release mode for this. Do you see any scope to optimize 
 the GRPC library for better performance by adding some build parameters? The 
 procedure used is the default one mentioned on the GRPC site.

 On Tuesday, September 7, 2021 at 10:18:38 PM UTC+5:30 Mark D. Roth 
 wrote:

> Thanks, that's helpful.
>
> From the trace, it looks like you're running on Windows.  Most of our 
> performance efforts have been focused on Linux, not Windows, so it may be 
> that this is just an inefficiency in gRPC's Windows TCP code.  Can you 
> run 
> the client on a Linux machine to see if it makes a difference?  I'd be 
> interested in seeing the log with the same env vars on Linux to compare 
> with the log you've just sent.
>
> One potential problem shown by the trace is 

[grpc-io] Re: gRPC executor threads and timer thread

2021-07-12 Thread 'AJ Heller' via grpc.io
We're adding a new API to gRPC core for exactly these kinds of situations. 
Please see https://github.com/grpc/proposal/pull/245 for information on the 
EventEngine API; I'd appreciate your feedback! In short, if you need 
fine-grained control over threading/eventing behaviors in gRPC, or want to 
hook into an external event loop, writing a custom EventEngine will give 
you that control. Please note this is currently experimental, the team is 
still working on the reference implementation, and the API will likely 
undergo some changes in the coming months.

Cheers,
-aj
On Thursday, July 8, 2021 at 12:23:43 PM UTC-7 yas...@google.com wrote:

>
> 1. What is the purpose of the timer thread?
> Throughout the gRPC stack, there are a bunch of deadlines and timeouts 
> that need to be tracked. The way gRPC Core does this is through timers. It 
> schedules a closure to be executed when that timer expires and this closure 
> is run on the timer thread.
>
> 2. Should everything Just Work™ even if we call 
> `grpc_timer_manager_set_threading(false)`
> No, it won't. :)
> On Tuesday, June 29, 2021 at 11:00:36 AM UTC-7 Jonathan Basseri wrote:
>
>> The context, following from our previous thread, is that we want to add 
>> grpc endpoints to an existing high-performance application. Our application 
>> already has extensive control over the allocations and threading on the 
>> system, so *we would prefer a single-threaded grpc server* that hands 
>> off async requests to our own work queue.
>>
>> All of the above seems to be working in Alex's prototype, but we want to 
>> make sure that stopping these threads is not going to cause problems down 
>> the line.
>>
>> 1. What is the purpose of the timer thread?
>> 2. Should everything Just Work™ even if we call 
>> `grpc_timer_manager_set_threading(false)`
>>
>> Thanks,
>> Jonathan
>>
>> On Monday, June 28, 2021 at 9:19:31 PM UTC-7 Alex Zuo wrote:
>>
>>> For executor threads, we can use Executor::SetThreadingAll(false) to 
>>> shut down. If there is no thread, it still works according to the following 
>>> code.
>>>
>>> void Executor::Enqueue(grpc_closure* closure, grpc_error_handle error,
>>>                         bool is_short)
>>> ... 
>>> do {
>>>   retry_push = false;
>>>   size_t cur_thread_count =
>>>       static_cast<size_t>(gpr_atm_acq_load(&num_threads_));
>>>
>>>   // If the number of threads is zero (i.e. either the executor is not
>>>   // threaded or already shutdown), then queue the closure on the exec
>>>   // context itself
>>>   if (cur_thread_count == 0) {
>>> #ifndef NDEBUG
>>>     EXECUTOR_TRACE("(%s) schedule %p (created %s:%d) inline", name_, closure,
>>>                    closure->file_created, closure->line_created);
>>> #else
>>>     EXECUTOR_TRACE("(%s) schedule %p inline", name_, closure);
>>> #endif
>>>     grpc_closure_list_append(grpc_core::ExecCtx::Get()->closure_list(),
>>>                              closure, error);
>>>     return;
>>>   }
>>>
>>> For the timer thread, there is a function to shut it down. However, I 
>>> cannot tell what the impact is if there is no such thread. I also don't 
>>> know how the timer is used.
>>>
>>> void grpc_timer_manager_set_threading(bool enabled);
>>>
>>> Does anybody have any insight? 
>>>
>>> Thanks,
>>> Alex
>>>
>>



[grpc-io] Re: What's the Threading Model behind Completion Queue?

2021-07-28 Thread 'AJ Heller' via grpc.io
Hi Lixin. Good questions! I can offer a high-level summary.

> I'm wondering what's the threading model behind the completion queue?

This is a bit of an oversimplification, but the C++ API's `CompletionQueue` 
borrows threads from the application. Work is done when applications make a 
blocking call to `CompletionQueue::Next`. See the API docs here 
https://grpc.github.io/grpc/cpp/classgrpc_1_1_completion_queue.html#a86d9810ced694e50f7987ac90b9f8c1a.
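
In code, that borrowing is just the familiar drain loop from the async 
examples (a sketch; `CallData` and `Proceed` stand in for whatever per-call 
state machine your application defines):

```
// Sketch: the application thread blocked in Next() is the thread gRPC
// borrows to make progress and deliver completions.
void ServeForever(grpc::ServerCompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {                // blocks until an event is ready
    static_cast<CallData*>(tag)->Proceed(ok);  // app-defined state machine
  }
}
```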

> Who produces items to the completion queue?

Applications do, for the most part. Some of this is covered in the C++ 
Asynchronous-API tutorial: https://grpc.io/docs/languages/cpp/async/. 

> What is between the completion queue and the network?

Quite a few things - the majority of gRPC sits between them. At a high 
level, there's the transport layer, handling things like HTTP/2 and cronet 
transports. Then there are filters that both filter and augment calls, 
adding things like max_age filtering and load balancing for client 
channels. The bottom-most layer is called iomgr, providing things like 
network connectivity and timers.

On Thursday, July 22, 2021 at 11:20:02 PM UTC-7 Lixin Wei wrote:

> I'm wondering what's the threading model behind the completion queue?
>
> Who produces items to the completion queue? What is between the completion 
> queue and the network?
>



[grpc-io] Re: Should I use async API instead of sync one ?

2021-09-21 Thread 'AJ Heller' via grpc.io
Hi Théo. You'll at least want to consider using the async API. 
See https://grpc.io/docs/guides/performance/#c for some handy guidelines. 
The sync API may be fine for fast, non-blocking server-side operations. For 
the streaming method, you've limited your concurrency there a bit, but 
that's not a big deal if you're only ever serving 3 clients concurrently. 
You might run into diminishing returns with such a high thread count, 
paying thread-switch costs for a large number of threads when your service 
is under heavy load. As to whether it's inevitable that you move to the 
async API ... if you have a working solution that performs well enough for 
your needs, and if you know it doesn't need to grow that much, then it may 
be reasonable to keep what you have. It's a judgement call, maybe something 
to benchmark and measure for yourself.
On Friday, September 10, 2021 at 12:43:22 AM UTC-7 Théo Depalle wrote:

>
> Hi everyone,
>
> I just implemented a cpp interface for my software using synchronous gRPC 
> API. Both clients and server will run locally.
>
> I have multiple services with different kind of rpc : 
> - simple RPC to get/set value (fast computation time)
> - server side streaming RPC to subscribe to continuous data informations
>
> The server-side rpcs are "infinite", as the server sends data at a given 
> frequency until the client disconnects. To avoid having only this kind of 
> request in my thread pool and blocking simple rpc requests, I set a maximum 
> client number for each streaming rpc. 
> Finally, I set the number of threads in my thread pool in order to have at 
> least one thread for each request if I have:
> - as many requests as the maximum number of clients for each streaming rpc
> - 2 requests for each simple rpc.
>
> Example: let's consider a server with only one service containing 1 
> server-side streaming rpc and 2 simple rpcs. The maximum client number for 
> the streaming rpc is 3. I will set my thread pool to 3 + 2 * 2 = 7 threads. 
>
> In the normal use case I will have 2-3 clients maximum for each rpc, but as 
> I have a total of 18 simple rpcs and 5 server-side streaming rpcs, I can 
> have a thread pool of between approximately 50 and 70 threads. If my 
> interface evolves, I fear that I will have too many threads in my thread pool.
>
> Can you think of other limitations I could have using the synchronous 
> API? Do you think it is inevitable to move to the async API for this kind 
> of interface? 
>
> Looking forward to your feedback!
>
> Théo
>



[grpc-io] Re: gRPC server crash when calling CompletionQueue::AsyncNext and then grpc_byte_buffer_destroy

2021-10-20 Thread 'AJ Heller' via grpc.io
*There have _been_ a handful ...

On Wednesday, October 20, 2021 at 11:08:09 AM UTC-7 AJ Heller wrote:

> There have a handful of crash fixes since January, when 1.35 was released. 
> Can you reproduce this with v1.41? If so, a minrepro would be helpful.
>
> On Wednesday, October 13, 2021 at 4:11:34 AM UTC-7 mykyta@gmail.com 
> wrote:
>
>> Hi, I am using gRPC version 1.35. I am experiencing the following crash 
>> and I believe the problem is on gRPC side. I've found the following page 
>> with similar crash https://github.com/grpc/grpc/issues/23270 . I would 
>> appreciate any suggestions from your side. Thank you! Please find the crash 
>> trace below.
>>
>> #0 0x7fd5afedb387 in raise () from /lib64/libc.so.6 #1 
>> 0x7fd5afedca78 in abort () from /lib64/libc.so.6 #2 0x7fd5aff1df67 
>> in __libc_message () from /lib64/libc.so.6 #3 0x7fd5aff26329 in 
>> _int_free () from /lib64/libc.so.6 #4 0x7fd5b3b306ca in gpr_free () 
>> from /lib64/libgpr.so.14 #5 0x7fd5b5aaabdc in grpc_byte_buffer_destroy 
>> () from /lib64/libgrpc.so.14 #6 0x7fd5b601bc5c in 
>> grpc::CoreCodegen::grpc_byte_buffer_destroy(grpc_byte_buffer*) () from 
>> /lib64/libgrpc++.so.1 #7 0x7fd5b7337e56 in grpc::ByteBuffer::Clear 
>> (this=0x7fd2cd76e490) at /usr/include/grpcpp/impl/codegen/byte_buffer.h:124 
>> #8 grpc::internal::CallOpSendMessage::FinishOp (status=0x7fd5a603c88f, 
>> this=0x7fd2cd76e480) at /usr/include/grpcpp/impl/codegen/call_op_set.h:328 
>> #9 grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, 
>> grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, 
>> grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, 
>> grpc::internal::CallNoOp<6> >::FinalizeResult (this=0x7fd2cd76e450, 
>> tag=0x7fd5a603c890, status=0x7fd5a603c88f) at 
>> /usr/include/grpcpp/impl/codegen/call_op_set.h:917 #10 0x7fd5b601b703 
>> in grpc::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () 
>> from /lib64/libgrpc++.so.1 #11 0x7fd5b7330f32 in 
>> grpc::CompletionQueue::AsyncNext<std::chrono::time_point<std::chrono::system_clock, 
>> std::chrono::duration<...> > > 
>> (deadline=..., ok=0x7fd5a603c88f, tag=0x7fd5a603c890, this=0x7fd2cd7596f0) 
>> at /usr/include/grpcpp/impl/codegen/time.h:81
>>
>



[grpc-io] Re: gRPC server crash when calling CompletionQueue::AsyncNext and then grpc_byte_buffer_destroy

2021-10-20 Thread 'AJ Heller' via grpc.io
There have a handful of crash fixes since January, when 1.35 was released. 
Can you reproduce this with v1.41? If so, a minrepro would be helpful.

On Wednesday, October 13, 2021 at 4:11:34 AM UTC-7 mykyta@gmail.com 
wrote:

> Hi, I am using gRPC version 1.35. I am experiencing the following crash 
> and I believe the problem is on gRPC side. I've found the following page 
> with similar crash https://github.com/grpc/grpc/issues/23270 . I would 
> appreciate any suggestions from your side. Thank you! Please find the crash 
> trace below.
>
> #0 0x7fd5afedb387 in raise () from /lib64/libc.so.6 #1 
> 0x7fd5afedca78 in abort () from /lib64/libc.so.6 #2 0x7fd5aff1df67 
> in __libc_message () from /lib64/libc.so.6 #3 0x7fd5aff26329 in 
> _int_free () from /lib64/libc.so.6 #4 0x7fd5b3b306ca in gpr_free () 
> from /lib64/libgpr.so.14 #5 0x7fd5b5aaabdc in grpc_byte_buffer_destroy 
> () from /lib64/libgrpc.so.14 #6 0x7fd5b601bc5c in 
> grpc::CoreCodegen::grpc_byte_buffer_destroy(grpc_byte_buffer*) () from 
> /lib64/libgrpc++.so.1 #7 0x7fd5b7337e56 in grpc::ByteBuffer::Clear 
> (this=0x7fd2cd76e490) at /usr/include/grpcpp/impl/codegen/byte_buffer.h:124 
> #8 grpc::internal::CallOpSendMessage::FinishOp (status=0x7fd5a603c88f, 
> this=0x7fd2cd76e480) at /usr/include/grpcpp/impl/codegen/call_op_set.h:328 
> #9 grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, 
> grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, 
> grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, 
> grpc::internal::CallNoOp<6> >::FinalizeResult (this=0x7fd2cd76e450, 
> tag=0x7fd5a603c890, status=0x7fd5a603c88f) at 
> /usr/include/grpcpp/impl/codegen/call_op_set.h:917 #10 0x7fd5b601b703 
> in grpc::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () 
> from /lib64/libgrpc++.so.1 #11 0x7fd5b7330f32 in 
> grpc::CompletionQueue::AsyncNext<std::chrono::time_point<std::chrono::system_clock, 
> std::chrono::duration<...> > > 
> (deadline=..., ok=0x7fd5a603c88f, tag=0x7fd5a603c890, this=0x7fd2cd7596f0) 
> at /usr/include/grpcpp/impl/codegen/time.h:81
>



[grpc-io] Re: grpc c++: how to create async callback client

2021-09-22 Thread 'AJ Heller' via grpc.io
 

You should not need to manage your own threads for basic callback usage, 
that’s one of the callback API’s design goals (
https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md#proposal
).

Take a look at the route_guide example here: 
https://github.com/grpc/grpc/blob/v1.41.x/examples/cpp/route_guide/route_guide_callback_client.cc.
 
It has a server-streaming client for ListFeatures, a client-streaming 
client for RecordRoute, a bidi client for RouteChat, and the recommended 
“shortcut” method of making a client unary call via 
`stub->async()->GetFeature(…)`, where the lambda stands in for a 
ClientUnaryReactor’s OnDone method (
https://github.com/grpc/grpc/blob/v1.41.x/examples/cpp/route_guide/route_guide_callback_client.cc#L304).
 
For streaming operations, gRPC manages threads to call the reactor 
callbacks asynchronously. If the application needs to wait for some 
streaming operation to finish, it *can* block the main thread using 
something like the custom Await method of the ListFeatures Reader, but 
that's entirely optional and up to the application developer.
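
For reference, the unary "shortcut" from that example looks roughly like this 
(a sketch; `stub_` and the completion signaling are simplified):

```
// Sketch of the callback-API unary shortcut: the lambda plays the role of a
// ClientUnaryReactor's OnDone and runs on gRPC-managed threads.
grpc::ClientContext context;
routeguide::Point request;
routeguide::Feature response;
stub_->async()->GetFeature(&context, &request, &response,
                           [](grpc::Status status) {
                             // Signal the waiting caller here, e.g. via a
                             // mutex + condition variable as the example does.
                           });
```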

Hope that helps!
On Tuesday, September 7, 2021 at 2:22:47 PM UTC-7 oleg@idt.net wrote:

> Hello, could you possibly help me with creating an async callback client? 
> Should I create a UnaryClientReactor per rpc, or should I run every rpc in a 
> separate thread?
> There is a cq-based async client example, but the callback client example 
> looks like a sync rpc with thread blocking.
>
> Thank you.
>



[grpc-io] Announcement: Possible breaking change to community-supported platforms on gRPC-core

2021-11-03 Thread 'AJ Heller' via grpc.io
Hello gRPC community!

I'll be introducing a change (#27513) to gRPC-core which may break the
build on some community-supported platforms.
This affects all of the languages that rely on gRPC-core, including C++,
Python, Ruby, Objective-C, and PHP.

We've taken great care to ensure that the officially-supported platforms
(many versions of Linux, MacOS, Windows), along with some of the platforms
with best-effort support (Android and iOS), are covered by our continuous
integration test suite. These platforms should continue to build and run
happily. The community-supported platforms may or may not continue to
build, we don't have the resources to test them all. We continue to rely
on external contributions to maintain gRPC on these platforms.

If you use gRPC-core on a community-supported platform, please check out
and attempt to build PR #27513.
The PR does not introduce any functional changes at this time, so a
successful build should be sufficient. Ultimately, we are working towards
having libuv drive gRPC's low-level I/O operations. It
may be worth testing libuv on your platform as well to ensure core
functionality is working.
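
If it helps, a minimal libuv smoke test might look something like this (a 
sketch, not an official conformance test; it only checks that a loop can be 
created, a timer fires, and the loop shuts down cleanly on your platform):

```
// Minimal libuv smoke test (illustrative only).
#include <cstdio>
#include <uv.h>

static void on_timer(uv_timer_t* handle) {
  std::printf("libuv timer fired\n");
  uv_timer_stop(handle);
}

int main() {
  uv_loop_t* loop = uv_default_loop();
  uv_timer_t timer;
  uv_timer_init(loop, &timer);
  uv_timer_start(&timer, on_timer, /*timeout_ms=*/10, /*repeat=*/0);
  uv_run(loop, UV_RUN_DEFAULT);
  return uv_loop_close(loop);
}
```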

As currently planned, this change will land in the gRPC master branch the
week of Nov 8th-12th, and it will likely be included in the v1.43 release
of gRPC in mid-December.

Best regards,

AJ Heller
Software Engineer

h...@google.com



[grpc-io] Re: Announcement: Possible breaking change to community-supported platforms on gRPC-core

2022-02-01 Thread 'AJ Heller' via grpc.io
Update on the timeline: we now expect to land this change in the master 
branch in the next few weeks, followed by an official release this coming 
March or April. As stated above, this change will require libuv to build on 
all supported platforms, and we've made an effort to ensure all supported 
platforms will continue to function seamlessly. If you have any questions 
or concerns about this change, please don't hesitate to reach out.

Best regards,
-aj

On Wednesday, November 3, 2021 at 10:43:13 AM UTC-7 AJ Heller wrote:

> Hello gRPC community!
>
> I'll be introducing a change (#27513) to gRPC-core which may break 
> the build on some community-supported platforms.
>  
> This affects all of the languages that rely on gRPC-core, including C++, 
> Python, Ruby, Objective-C, and PHP.
>
> We've taken great care to ensure that the officially-supported platforms 
> (many versions of Linux, MacOS, Windows), along with some of the platforms 
> with best-effort support (Android and iOS), are covered by our continuous 
> integration test suite. These platforms should continue to build and run 
> happily. The community-supported platforms may or may not continue to 
> build, we don't have the resources to test them all. We continue to rely 
> on external contributions to maintain gRPC on these platforms.
>
> If you use gRPC-core on a community-supported platform, please check out 
> and attempt to build PR #27513. 
> The PR does not introduce any functional changes at this time, so a 
> successful build should be sufficient. Ultimately, we are working towards 
> having libuv drive gRPC's low-level I/O operations. 
> It may be worth testing libuv on your platform as well to ensure core 
> functionality is working.
>
> As currently planned, this change will land in the gRPC master branch the 
> week of Nov 8th-12th, and it will likely be included in the v1.43 release 
> of gRPC in mid-December.
>
> Best regards,
>
> AJ Heller
> Software Engineer
>
> ho...@google.com
>



[grpc-io] Re: How are bidirectional streams handled for TCP disconnection exceptions in gRPC?

2023-09-13 Thread 'AJ Heller' via grpc.io
This is answered on StackOverflow.

On Saturday, September 9, 2023 at 11:56:35 PM UTC-7 borong wrote:

>
> https://stackoverflow.com/questions/77075070/how-are-bidirectional-streams-handled-for-tcp-disconnection-exceptions-in-grpc
>
>
>
> specific description
>



[grpc-io] Re: Handling gRPC-gateway for .NET Core

2023-09-13 Thread 'AJ Heller' via grpc.io
gRPC-gateway is a Go project. grpc-web might be able to help you: 
https://github.com/grpc/grpc-web

On Tuesday, September 12, 2023 at 10:15:58 PM UTC-7 David CHANE wrote:

> Dear Mrs., Mr ,
>
>  
>
> I am currently programming an app in gRPC using the .NET environment and 
> writing my code in C#.
>
>  
>
> I want to use gRPC-gateway to still be able to handle REST protocol with 
> clients.
>
>  
>
> However, the only resources I can see are written in the Go language. 
> Is it possible to do this in C#, or in the Visual Studio environment?
> Is it possible to do it but in c# or in Visual Studio environment ?
>
>  
>
>  
>
> Here you can find their github page : 
> https://github.com/grpc-ecosystem/grpc-gateway
>
>  
>
>  
>
>  
>
> Thank you for considering my request. I look forward to hearing from you.
>
>  
>
>  
>
> Sincerely,
>
>  
>
> David CHANE YOCK NAM,
>
> david...@ioconnect.re 
>



[grpc-io] Re: The following imported targets are referenced, but are missing: absl::any_invocable while trying to use gRPC for a project

2023-09-13 Thread 'AJ Heller' via grpc.io
I believe this was answered in https://github.com/grpc/grpc/issues/34299

On Monday, September 11, 2023 at 2:29:30 AM UTC-7 Abhishek Ghosh wrote:

> The contents of the file /usr/local/lib/cmake/grpc/gRPCConfig.cmake
> # Module path
> list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR}/modules)
> # Depend packages
> if(NOT TARGET absl::strings)
>   find_package(absl CONFIG)
> endif()
> # Targets
> include(${CMAKE_CURRENT_LIST_DIR}/gRPCTargets.cmake)
> if(NOT CMAKE_CROSSCOMPILING)
>   include(${CMAKE_CURRENT_LIST_DIR}/gRPCPluginTargets.cmake)
> endif()
> On Monday, September 11, 2023 at 2:53:30 PM UTC+5:30 Abhishek Ghosh wrote:
>
>> I am using the latest version as of the 10th of September, 2023. I am 
>> using gRPC for C++ on linux.
>> PACKAGE VERSION 1.59.0-dev CORE VERSION 35.0.0 
>>
>> I am trying to set up the code base of a project from Github. The code 
>> base uses gRPC and cmake to establish the dependencies.
>>
>> From the CMakeLists.txt:
>> find_package(gRPC CONFIG REQUIRED)
>> message(STATUS "Using gRPC ${gRPC_VERSION}")
>>
>> I think that I have successfully installed gRPC and its required 
>> dependencies. At least, while building gRPC from its source and then 
>> installing it (I used cmake to make and install the packages), I did not 
>> find any issues or errors.
>> $ git clone https://github.com/grpc/grpc/
>> $ cd grpc
>> $ git submodule update --init --recursive
>> $ cd cmake
>> $ mkdir build
>> $ cd build
>> $ cmake ../..
>> $ make -j`nproc`
>> $ sudo make install
>>
>> But while trying to build the project which I am trying to set up (the 
>> corresponding CMakeLists.txt file), I get the following error:
>> Call Stack (most recent call first):
>>   /usr/share/cmake-3.25/Modules/ExternalProject.cmake:4185 (_ep_add_download_command)
>>   worker/CMakeLists.txt:17 (ExternalProject_Add)
>> This warning is for project developers. Use -Wno-dev to suppress it.
>> Using protobuf 24.2.0
>> CMake Error at worker/serverless_gpu/CMakeLists.txt:16 (find_package):
>>   Found package configuration file:
>>     /usr/local/lib/cmake/grpc/gRPCConfig.cmake
>>   but it set gRPC_FOUND to FALSE so package "gRPC" is considered to be
>>   NOT FOUND. Reason given by package:
>>     The following imported targets are referenced, but are missing:
>>       absl::any_invocable
>>
>> I am new to build systems, cmake in particular, and I am not very 
>> experienced. Can anyone please guide me to fix this issue?
>>
>> I am using Ubuntu 18.04 with 5.4.0-150-generic kernel.
>>
>



[grpc-io] Re: How to find boringSSL version in grpcio 1.21.1 ?

2023-09-13 Thread 'AJ Heller' via grpc.io
I'm not entirely sure how to help you with such an old version. I'd 
recommend trying with a more recent gRPC version; we are currently up to 
version 1.58. https://pypi.org/project/grpcio/

On Monday, September 11, 2023 at 3:40:12 AM UTC-7 Reena THOMAS wrote:

> I am downloading tar file from 
> https://files.pythonhosted.org/packages/fb/d5/30bc142a40bb891c28739ec48c99730d20e5fb9cf9637036b4b52f70505b/grpcio-1.21.1.tar.gz
>  
> , and ran "python setup.py install" 
>
> I am unable to find a clear way to determine the boringSSL version that is 
> mapped to grpcio 1.21.1.
>
> Is there a way to find it from the source code or from the above tar file?
> Any help will be appreciated. 
>
> Project: https://pypi.org/project/grpcio/1.21.1/#files
>



[grpc-io] Re: gRPC Connections Managment golang

2023-09-13 Thread 'AJ Heller' via grpc.io
I'm not sure what your question here is, exactly. I'd recommend starting 
with this grpc-go tutorial https://grpc.io/docs/languages/go/basics/, which 
will teach you the basics of creating connections and issuing RPCs. You may 
be most interested in bidirectional streaming.

On Saturday, September 9, 2023 at 6:57:57 AM UTC-7 Kareem Adem wrote:

> Hi Buddies,
> I am new to gRPC, and I have a server and multiple clients.
> So I want to store each client connection in order to be able to send 
> messages to a specific client.
>
>



[grpc-io] Patch Releases for CVE-2023-4785, covering gRPC Core, C++, Python, and Ruby

2023-09-19 Thread 'AJ Heller' via grpc.io
Patched versions of the affected gRPC libraries have been released to
address CVE-2023-4785. *Please
deploy patched libraries if all of the following apply to you:*

 * You are using gRPC C++, Python, or Ruby.
 * You are running a gRPC Server in one of those languages.
 * You are using an unpatched version of the gRPC library.

The following set of releases contain the fix:

 * 1.57.0 and later: https://github.com/grpc/grpc/releases/tag/v1.57.0
 * 1.56.2: https://github.com/grpc/grpc/releases/tag/v1.56.2
 * 1.55.3: https://github.com/grpc/grpc/releases/tag/v1.55.3
 * 1.54.3: https://github.com/grpc/grpc/releases/tag/v1.54.3
 * 1.53.2: https://github.com/grpc/grpc/releases/tag/v1.53.2

Best regards,
-aj


-- 

AJ Heller
Software Engineer

h...@google.com



[grpc-io] Re: grpc ClientAsyncReaderWriter::Write() crashes: !byte_buffer->Valid()

2023-08-16 Thread 'AJ Heller' via grpc.io
Yes, you need to wait for the Write itself to complete before you can 
attempt another Write. It isn't really an issue of efficiency; it's more 
about the nature of a network connection, and a mechanism to signal to the 
application that another write can proceed. 
See 
https://grpc.github.io/grpc/cpp/classgrpc_1_1internal_1_1_async_writer_interface.html#a03f8532dfbd6c82c7d1fed5bc6e79d79.

Examining this example's use of ClientAsyncReaderWriter may be helpful to 
you as 
well https://github.com/grpc/grpc/blob/master/test/cpp/qps/client_async.cc
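
A sketch of the gating pattern (the `Msg` type and tag values are 
illustrative; a real client would interleave this with reads and other 
completions):

```
// Sketch: at most one outstanding Write on a ClientAsyncReaderWriter.
// Issue a Write, then wait for its completion tag before the next Write.
#include <vector>
#include <grpcpp/grpcpp.h>

template <typename Msg>
void WriteAll(grpc::ClientAsyncReaderWriter<Msg, Msg>* stream,
              grpc::CompletionQueue* cq, const std::vector<Msg>& msgs) {
  for (const Msg& m : msgs) {
    stream->Write(m, /*tag=*/reinterpret_cast<void*>(1));
    void* tag;
    bool ok;
    if (!cq->Next(&tag, &ok) || !ok) break;  // wait for the Write; stop if broken
  }
}
```

To keep the pipe full without violating the one-outstanding-Write rule, 
applications typically buffer outgoing messages and issue the next Write from 
the completion of the previous one.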

On Tuesday, August 15, 2023 at 7:26:02 PM UTC-7 黄舒心 wrote:

> Hi,
> I want to use ClientAsyncReaderWriter::Write() in a loop like this:
> for (int i = 0; i < n; i++) {
>   Write(msg);
> }
> But I got this error: proto_buffer_writer.h:65]   assertion failed: 
> !byte_buffer->Valid().
>
> Does that mean that I can't call Write() multiple times without waiting for 
> a tag from the CompletionQueue?
>
> If I must wait for a tag, I can only write one message, wait until I 
> get the tag, then write the next message. I think that's very inefficient.
>
> So can I use ClientAsyncReaderWriter::Write() in a loop? If so, how do I 
> resolve the !byte_buffer->Valid() problem? If not, is there any other 
> method to call Write() multiple times without waiting for a tag, or any 
> other method that can Write() efficiently?
>
> Any advice is appreciated.
>



[grpc-io] Re: gRPC stuck in epoll_wait state

2022-05-17 Thread 'AJ Heller' via grpc.io
If you're still having this issue, it would be worth trying to upgrade to 
gRPC v1.46.0 or newer. The default polling engine has been removed, so if 
there is still an underlying bug in gnmi or gRPC, it may show up in some 
other way.

On Monday, December 13, 2021 at 4:43:01 PM UTC-8 nupur uttarwar wrote:

> Hello,
>
> We are using gnmi-cli client to configure ports which sends a unary rpc 
> request to gRPC.
>
> Eg: sudo gnmi-cli set 
> "device:virtual-device,name:net_vhost0,host:host1,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-0,port-type:LINK"
>
> This was working fine with gRPC version 1.17.2. We are trying to upgrade 
> gRPC and other dependent modules used in our project. After upgrading to 
> version 1.33, the gnmi client send request is stuck in epoll_wait 
> indefinitely. Here is the back trace:
>
> 0x7f85e9bc380e in epoll_wait () from /lib64/libc.so.6
>
> (gdb) bt
>
> #0  0x7f85e9bc380e in epoll_wait () from /lib64/libc.so.6
>
> #1  0x7f85eb642864 in pollable_epoll(pollable*, long) () from 
> /usr/local/lib/libgrpc.so.12
>
> #2  0x7f85eb6432e9 in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #3  0x7f85eb64acd5 in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #4  0x7f85eb652cde in grpc_pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #5  0x7f85eb6b9c50 in cq_pluck(grpc_completion_queue*, void*, 
> gpr_timespec, void*) () from /usr/local/lib/libgrpc.so.12
>
> #6  0x7f85eb6b9ed3 in grpc_completion_queue_pluck () from 
> /usr/local/lib/libgrpc.so.12
>
> #7  0x7f85ea856f2b in 
> grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue*, 
> void*, gpr_timespec, void*) ()
>
>from /usr/local/lib/libgrpc++.so.1
>
> #8  0x005db71e in grpc::CompletionQueue::Pluck 
> (this=0x7ffec74be7e0, tag=0x7ffec74be840)
>
> at /usr/local/include/grpcpp/impl/codegen/completion_queue.h:316
>
> #9  0x005e7467 in 
> grpc::internal::BlockingUnaryCallImpl<gnmi::SetRequest, 
> gnmi::SetResponse>::BlockingUnaryCallImpl (this=0x7ffec74beaa0,
>
> channel=<optimized out>, method=..., context=0x7ffec74beea0, 
> request=..., result=0x7ffec74bec40)
>
> at /usr/local/include/grpcpp/impl/codegen/client_unary_call.h:69
>
> #10 0x005d5dab in 
> grpc::internal::BlockingUnaryCall<gnmi::SetRequest, gnmi::SetResponse> 
> (result=0x7ffec74be670, request=...,
>
> context=0x7ffec74bebf0, method=..., channel=<optimized out>) at 
> /usr/local/include/grpcpp/impl/codegen/client_unary_call.h:38
>
> #11 gnmi::gNMI::Stub::Set (this=<optimized out>, 
> context=context@entry=0x7ffec74beea0, request=..., 
> response=response@entry=0x7ffec74bec40)
>
> at p4proto/p4rt/proto/p4/gnmi/gnmi.grpc.pb.cc:101
>
> #12 0x0041de62 in gnmi::Main (argc=-951325536, 
> argv=0x7ffec74bee20) at /usr/include/c++/10/bits/unique_ptr.h:173
>
> #13 0x7f85e9aea1e2 in __libc_start_main () from /lib64/libc.so.6
>
> #14 0x0041a06e in _start () at /usr/include/c++/10/new:175
>
>  
>
> Comparing the successful and unsuccessful logs, I can see that grpc gets 
> stuck in epoll_wait state waiting for OP_COMPLETE event after 
> grpc_call_start_batch is started. 
>
> After investigating further, I can see that this issue started from 
> version 1.32.0, mainly after this commit(
> https://github.com/grpc/grpc/pull/23372). Just before this commit, it 
> works fine.
>
> Attached are the logs with with GRPC_TRACE=all,-timer_check,-timer and 
> GRPC_VERBOSITY=DEBUG for reference. List of the logs attached:
>
>
>- Trace logs with gRPC version 1.32.0 for unsuccessful request - 
>https://gist.github.com/nupuruttarwar/f97bbd7f339843c45ab48a10be065f0b 
>- Trace logs with gRPC version 1.32.0 for successful request before 
>abseil synchronization was enabled (at commit 
>52cde540a4768eea7a7a1ad0f21c99f6b51eedf7) - 
>https://gist.github.com/nupuruttarwar/2d36e56a791a88690ce4ac9fb01666f7 
>- Trace logs with gRPC version 1.17.2 for successful request - 
>https://gist.github.com/nupuruttarwar/62d6bcb277309fc878d7f348d57c3fb6 
>
> Any idea why this is happening? Please let me know if you need more logs 
> or any other information to assist further.
>
>  
>
> Thanks,
>
> Nupur Uttarwar
>



[grpc-io] Re: limiting grpc memory usage

2022-06-13 Thread 'AJ Heller' via grpc.io
Which gRPC library are you using, and which language? C++, Java, Python, etc.?
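
For what it's worth, the observation below (8MB of capacity for a 4MB string) 
is consistent with std::string's geometric capacity growth when nothing calls 
reserve() up front; the exact growth factor is implementation-defined. A quick 
sketch to observe it (sizes are illustrative):

```
// Sketch: repeated append without reserve() can leave roughly 2x capacity.
#include <iostream>
#include <string>

int main() {
  std::string s;
  const std::string chunk(4096, 'x');
  while (s.size() < 4 * 1024 * 1024) s.append(chunk);
  std::cout << "size=" << s.size() << " capacity=" << s.capacity() << "\n";
}
```
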
On Monday, June 6, 2022 at 11:42:26 AM UTC-7 amandee...@gmail.com wrote:

> So, we identified that it might be because of 
> CodedInputStream::ReadStringFallback in protocol buffers.
> We do not reserve the buffer upfront and use string's append repeatedly 
> for some reason. This leads to a string capacity of 8MB for a 4MB string.
>
> Any pointers would be helpful.
> On Thursday, May 26, 2022 at 2:52:14 PM UTC-4 amandee...@gmail.com wrote:
>
>> We have a mechanism to limit the memory used by a process. To make sure 
>> that there are no violators, we rely on maxrss of the process. We check 
>> maxrss every few mins to see if we had seen a spike in memory which was 
>> beyond the permitted value.
>>
>> We have a grpc server and what we are seeing is that for a request with 
>> 4MB of payload, the maxrss of the process is becoming slightly greater than 
>> 8MB. This limits our effective memory utilization to just half in most of 
>> the scenario without violating the memory limit. My guess is that this is 
>> because grpc is not zero copy. Is there a way to make grpc zero copy? If 
>> not, is there a way to limit the spike in memory when multiple requests 
>> come in? 
>>
>



[grpc-io] Re: C++ Async Server Performance Issue

2022-06-13 Thread 'AJ Heller' via grpc.io
It's hard to tell, given there are a few variables here. Are you running 
ghz on the same machine as the gRPC server? How many threads are being 
spawned in both scenarios? It might be valuable for you to run something 
like perf and analyze the results to see where both processes are spending 
their time.
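
For example, on Linux (a sketch; substitute your server's PID and an 
appropriate sampling window):

```
# Sample the running server for 30 seconds, then inspect hot call stacks.
perf record -g -p <server_pid> sleep 30
perf report
```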

On Tuesday, May 31, 2022 at 11:44:28 PM UTC-7 Roshan Chaudhari wrote:

> I was expecting approach 2 to perform better, but this is not the case. Any 
> idea what I could be doing wrong here?
>
> On Wednesday, June 1, 2022 at 12:12:25 PM UTC+5:30 Roshan Chaudhari wrote:
>
>> I am using async implementation of C++ server. I tried 2 approaches:
>>
>> 1. While starting up the server, start only 1 outstanding RPC. When I 
>> receive a client connection, each of my Bidi RPCs will schedule one 
>> outstanding RPC for the next client. Once an RPC finishes, I destroy my 
>> BidiState/BidiContext using "delete this". 
>>
>> 2. I know the max number (n) of clients that could try to connect to my 
>> server. So I start n outstanding RPCs in the beginning. Once I get a client 
>> request, I do not fire up an outstanding RPC as in 1. Once an RPC finishes, 
>> I refresh BidiState/BidiContext instead of calling "delete this". This 
>> makes sure the number of outstanding RPCs always equals the number of 
>> clients that could connect.
>>
>> Now, I am using ghz benchmarking tool with the command:
>>
>> ghz -c 100 -n 100 --insecure --proto <>  --call <> 
>>
>> Approach 2:
>> Summary:
>>   Count:100
>>   Total:38.53 s
>>   Slowest:  12.01 ms
>>   Fastest:  0.33 ms
>>   Average:  3.08 ms
>>   Requests/sec: 25954.63
>>
>>
>> Latency distribution:
>>   10 % in 1.88 ms 
>>   25 % in 2.12 ms 
>>   50 % in 2.46 ms 
>>   75 % in 3.65 ms 
>>   90 % in 5.27 ms 
>>   95 % in 6.28 ms 
>>   99 % in 7.96 ms 
>>
>> Status code distribution:
>>   [OK]   100 responses 
>>
>> Approach 1:
>> Summary:
>>   Count:100
>>   Total:31.12 s
>>   Slowest:  10.21 ms
>>   Fastest:  0.88 ms
>>   Average:  2.68 ms
>>   Requests/sec: 32138.66
>>
>>
>> Latency distribution:
>>   10 % in 1.65 ms 
>>   25 % in 1.78 ms 
>>   50 % in 2.03 ms 
>>   75 % in 3.27 ms 
>>   90 % in 4.79 ms 
>>   95 % in 5.56 ms 
>>   99 % in 6.91 ms 
>>
>> Status code distribution:
>>   [OK]   100 responses   
>>
>>
>>



Re: [grpc-io] Alpine package for `grpc-cli` is broken in alpine 3.15

2022-06-14 Thread 'AJ Heller' via grpc.io
I believe this should be fixed in the alpine package build file. I've CC'd
the maintainer.

https://git.alpinelinux.org/aports/tree/community/grpc/APKBUILD?h=3.15-stable

On Tue, May 31, 2022 at 3:40 PM Blaine Nelson 
wrote:

> When I build the following docker image:
>
> ```
> FROM alpine:3.15
>
> RUN apk update && apk upgrade && apk add --no-cache grpc-cli
> ```
>
> and then run it using `docker run -it my_image:latest /bin/sh` it gives
> the following error:
>
> ```
> > grpc_cli
> Error loading shared library libgrpc++_test_config.so.1.42: No such file
> or directory (needed by /usr/bin/grpc_cli)
> Error relocating /usr/bin/grpc_cli: _ZN4grpc7testing8InitTestEPiPPPcb:
> symbol not found
> ```
>
> This package worked correctly in alpine 3.14 and I eventually found the
> issue was that the file was mis-named as
> `/usr/lib/libgrpc++_test_config.so.1.42.0`.  Creating a soft link resolved
> the issue:
>
> ```
> ln -s /usr/lib/libgrpc++_test_config.so.1.42.0
> /usr/lib/libgrpc++_test_config.so.1.42
> ```
>
> Can this issue be corrected in the Alpine 3.15 package itself?
>



[grpc-io] Re: what's difference between synchronous and asynchronous and callback for server?

2022-06-14 Thread 'AJ Heller' via grpc.io
Assuming we're talking about the C/C++ library, you're partially correct. 
Both the sync and callback API use multiple threads under the hood, created 
by gRPC. The Async CQ-based API requires the application to donate threads, 
so the thread count is controlled by you for the most part. For best 
performance, you'll likely want to donate <# of cpus> threads. See 
https://grpc.io/docs/guides/performance/
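
A sketch of the donation pattern for the CQ-based async API (assuming a single 
completion queue; many servers shard across several):

```
// Sketch: donate one thread per CPU to drain a server completion queue.
#include <algorithm>
#include <thread>
#include <vector>
#include <grpcpp/grpcpp.h>

void DrainLoop(grpc::ServerCompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    // Dispatch to application-defined per-call state keyed by `tag`.
  }
}

void DonateThreads(grpc::ServerCompletionQueue* cq) {
  std::vector<std::thread> threads;
  unsigned n = std::max(1u, std::thread::hardware_concurrency());
  for (unsigned i = 0; i < n; ++i) threads.emplace_back(DrainLoop, cq);
  for (auto& t : threads) t.join();
}
```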

Best,
-aj

On Thursday, May 26, 2022 at 12:59:30 AM UTC-7 xied...@gmail.com wrote:

> I have read the tutorial and googled this question, but still have some 
> confusion. Here is my understanding of their differences:
> 1. Synchronous and callback grpc manage the request/response queues 
> and thread model themselves, but asynchronous lets the user provide the 
> thread management; am I right?
> 2. The sync and callback styles are not multi-threaded; am I right?
>
> Thanks very much
>



[grpc-io] Re: Server to client inverted rpc calls

2022-08-24 Thread 'AJ Heller' via grpc.io
I'm not familiar with third-party solutions in this space, but I don't 
believe there's a better answer today for reverse-tunneling with gRPC 
alone. The tracking issue for the work is here: 
https://github.com/grpc/grpc/issues/14101. You can manually wire this sort 
of thing up using BiDi streams, but it may not be terribly ergonomic. 

On Thursday, August 18, 2022 at 6:50:52 AM UTC-7 andre...@gmail.com wrote:

> Hi,
>
> I'm interested in a workflow where client would connect to a server and 
> server would call methods on a client (inverted communication).
>
> I followed an old discussion about a server-to-server communication:
> https://groups.google.com/g/grpc-io/c/Hfl-YotN5wg
>
> I also checked a Java based POC implementation.
> https://github.com/grpc/grpc-java/pull/3987
>
> I was wondering whether any Python client implementations exist?
>



[grpc-io] Re: gRPC C++ callback API

2022-10-19 Thread 'AJ Heller' via grpc.io
That's fine. You can replace the Wait call with 
`absl::SleepFor(absl::Seconds(42))` and it should work fine, since the 
callback API does not need to borrow threads from the application. At 
shutdown, you may still want to wait on the gRPC server to finish doing 
its job before exiting, but that's a separate matter.
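
A sketch of that shape (builder setup elided; `DoApplicationWork` is a 
hypothetical stand-in for whatever your main thread does instead of blocking):

```
// Sketch: callback-API server without blocking in Wait(); RPCs are serviced
// on gRPC-internal threads while the application thread does other work.
std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

DoApplicationWork();  // hypothetical: the app's own main loop

// At exit: stop accepting new RPCs, let in-flight ones drain.
server->Shutdown();
server->Wait();  // returns once shutdown completes
```
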
On Wednesday, October 12, 2022 at 3:13:35 PM UTC-7 Rohit Zambre wrote:

> Hi,
>
> Is it possible to use the callback API without calling server->Wait()?
>
> In the route_guide example, I see that route_guide_callback_server.cc calls 
> server->Wait(). In my use case, I cannot block on a call like Wait().
>
> Regards,
> Rohit
>



Re: [grpc-io] Re: gRPC C++ callback API

2022-10-21 Thread 'AJ Heller' via grpc.io
That's correct. You can read more about the async API 
here https://grpc.io/docs/languages/cpp/async/, specifically where it says 
you have to call `Next` to poll for events. The callback API leverages 
gRPC-internal threads to execute application-provided callbacks. The 
callback API is described in more detail 
here: https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md

On Thursday, October 20, 2022 at 5:03:26 PM UTC-7 Rohit Zambre wrote:

> Got it, thank you.
>
> This may be an orthogonal question -- does the async API (polling for 
> events on the completion queue) steal from the application's threads? In 
> other words, is the gRPC runtime running in the background even in the 
> async API?
> I have been using the async API under the assumption that the gRPC engine 
> is not invoked (except for accepting new connection requests) until I call 
> Next() or AsyncNext() on a completion queue.
>
> On Wed, Oct 19, 2022 at 11:36 AM 'AJ Heller' via grpc.io <
> grp...@googlegroups.com> wrote:
>
>> That's fine. You can replace the Wait call with 
>> `absl::SleepFor(absl::Seconds(42))` and it should work fine, since the 
>> callback API does not need to borrow threads from the application. At 
>> shutdown, you may still want to wait on the gRPC server to finish doing 
>> its job before exiting, but that's a separate matter.
>> On Wednesday, October 12, 2022 at 3:13:35 PM UTC-7 Rohit Zambre wrote:
>>
>>> Hi,
>>>
>>> Is it possible to use the callback API without calling server->Wait()?
>>>
>>> In the route_guide example, I see that route_guide_callback_server.cc 
>>> calls server->Wait(). In my use case, I cannot block on a call like Wait().
>>>
>>> Regards,
>>> Rohit
>>>



[grpc-io] Re: grpc crash during streaming call

2022-09-13 Thread 'AJ Heller' via grpc.io
I see. I'm guessing that's your system package, maybe on Ubuntu or Debian?

Please try a build with the latest official release 
from https://github.com/grpc/grpc/releases. We are on v1.48 now; the v1.30 
release is two years old and outside of our maintenance window. Numerous bug 
fixes and implementation changes have happened since then.
On Tuesday, September 13, 2022 at 12:47:53 AM UTC-7 pragadeesh...@gmail.com 
wrote:

> Hi,
> I was using the below versions
> grpc: v1.30.0
> protobuf: 3.12.2
>
> I think these seem to be the latest versions.
>
> On Tuesday, September 6, 2022 at 5:46:52 PM UTC+5:30 Pragadeesh nagaraj 
> wrote:
>
>> I am running gRPC on a low-resource system. The gRPC server is streaming 
>> messages to the client; after some time, gRPC crashes.
>>
>> Please help me understand the behavior.
>>
>> bt for the same:
>> Program terminated with signal SIGABRT, Aborted.
>> #0  0x7fcf5ad9800b in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> [Current thread is 1 (Thread 0x7fcefa7fc700 (LWP 583))]
>> #0  0x7fcf5ad9800b in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #1  0x7fcf5ad77859 in abort () from /lib/x86_64-linux-gnu/libc.so.6
>> #2  0x7fcf5ade226e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #3  0x7fcf5adea2fc in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x7fcf5adea96b in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x7fcf5adeaaaf in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #6  0x7fcf5adecc83 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #7  0x7fcf5adef299 in malloc () from /lib/x86_64-linux-gnu/libc.so.6
>> #8  0x7fcf5a438043 in gpr_malloc () from /usr/local/lib/libgpr.so.10
>> #9  0x7fcf5a990069 in ru_slice_create(grpc_resource_user*, unsigned 
>> long) () from /usr/local/lib/libgrpc.so.10
>> #10 0x7fcf5a990840 in 
>> ru_alloc_slices(grpc_resource_user_slice_allocator*) () from 
>> /usr/local/lib/libgrpc.so.10
>> #11 0x7fcf5a991edb in 
>> grpc_resource_user_alloc_slices(grpc_resource_user_slice_allocator*, 
>> unsigned long, unsigned long, grpc_slice_buffer*) () from 
>> /usr/local/lib/libgrpc.so.10
>> #12 0x7fcf5a998d3a in tcp_continue_read((anonymous 
>> namespace)::grpc_tcp*) () from /usr/local/lib/libgrpc.so.10
>> #13 0x7fcf5a998eb9 in tcp_handle_read(void*, grpc_error*) () from 
>> /usr/local/lib/libgrpc.so.10
>> #14 0x7fcf5a7e7924 in 
>> grpc_core::Closure::Run(grpc_core::DebugLocation const&, grpc_closure*, 
>> grpc_error*) () from /usr/local/lib/libgrpc.so.10
>> #15 0x7fcf5a99903c in tcp_read(grpc_endpoint*, grpc_slice_buffer*, 
>> grpc_closure*, bool) () from /usr/local/lib/libgrpc.so.10
>> #16 0x7fcf5a974231 in grpc_endpoint_read(grpc_endpoint*, 
>> grpc_slice_buffer*, grpc_closure*, bool) () from 
>> /usr/local/lib/libgrpc.so.10
>> #17 0x7fcf5a92e240 in 
>> continue_read_action_locked(grpc_chttp2_transport*) () from 
>> /usr/local/lib/libgrpc.so.10
>> #18 0x7fcf5a92e157 in read_action_locked(void*, grpc_error*) () from 
>> /usr/local/lib/libgrpc.so.10
>> #19 0x7fcf5a973b48 in grpc_combiner_continue_exec_ctx() () from 
>> /usr/local/lib/libgrpc.so.10
>> #20 0x7fcf5a9865de in grpc_core::ExecCtx::Flush() () from 
>> /usr/local/lib/libgrpc.so.10
>> #21 0x7fcf5a97d847 in pollset_work(grpc_pollset*, 
>> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
>> #22 0x7fcf5a98592a in pollset_work(grpc_pollset*, 
>> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
>> #23 0x7fcf5a98d9f0 in grpc_pollset_work(grpc_pollset*, 
>> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
>> #24 0x7fcf5a9f40bc in cq_pluck(grpc_completion_queue*, void*, 
>> gpr_timespec, void*) () from /usr/local/lib/libgrpc.so.10
>> #25 0x7fcf5a9f4383 in grpc_completion_queue_pluck () from 
>> /usr/local/lib/libgrpc.so.10
>> #26 0x7fcf629ad14d in 
>> grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue*, 
>> void*, gpr_timespec, void*) () from /usr/local/lib/libgrpc++.so.1
>> #27 0x556bdacbcef4 in grpc_impl::CompletionQueue::Pluck 
>> (this=0x7fcefba8, tag=0x7fcefa7fbb30) at 
>> /usr/local/include/grpcpp/impl/codegen/completion_queue_impl.h:321
>> #28 0x7fcf5c4f4e27 in grpc_impl::ClientReader::Read 
>> (this=0x7fcefb90, msg=0x556bdb10de10) at 
>> /usr/local/include/grpcpp/impl/codegen/sync_stream_impl.h:215
>>
>



[grpc-io] Re: grpc crash during streaming call

2022-09-11 Thread 'AJ Heller' via grpc.io
What version of gRPC are you using? Can you try with the latest release? 
The ru_alloc_slices code was rewritten a while ago, and it would be 
valuable to see if this bug still shows up or not.

On Tuesday, September 6, 2022 at 5:16:52 AM UTC-7 pragadeesh...@gmail.com 
wrote:

> I am running gRPC on a low-resource system. The gRPC server is streaming 
> messages to the client; after some time, gRPC crashes.
>
> Please help me understand the behavior.
>
> bt for the same:
> Program terminated with signal SIGABRT, Aborted.
> #0  0x7fcf5ad9800b in raise () from /lib/x86_64-linux-gnu/libc.so.6
> [Current thread is 1 (Thread 0x7fcefa7fc700 (LWP 583))]
> #0  0x7fcf5ad9800b in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x7fcf5ad77859 in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x7fcf5ade226e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  0x7fcf5adea2fc in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #4  0x7fcf5adea96b in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #5  0x7fcf5adeaaaf in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #6  0x7fcf5adecc83 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #7  0x7fcf5adef299 in malloc () from /lib/x86_64-linux-gnu/libc.so.6
> #8  0x7fcf5a438043 in gpr_malloc () from /usr/local/lib/libgpr.so.10
> #9  0x7fcf5a990069 in ru_slice_create(grpc_resource_user*, unsigned 
> long) () from /usr/local/lib/libgrpc.so.10
> #10 0x7fcf5a990840 in 
> ru_alloc_slices(grpc_resource_user_slice_allocator*) () from 
> /usr/local/lib/libgrpc.so.10
> #11 0x7fcf5a991edb in 
> grpc_resource_user_alloc_slices(grpc_resource_user_slice_allocator*, 
> unsigned long, unsigned long, grpc_slice_buffer*) () from 
> /usr/local/lib/libgrpc.so.10
> #12 0x7fcf5a998d3a in tcp_continue_read((anonymous 
> namespace)::grpc_tcp*) () from /usr/local/lib/libgrpc.so.10
> #13 0x7fcf5a998eb9 in tcp_handle_read(void*, grpc_error*) () from 
> /usr/local/lib/libgrpc.so.10
> #14 0x7fcf5a7e7924 in grpc_core::Closure::Run(grpc_core::DebugLocation 
> const&, grpc_closure*, grpc_error*) () from /usr/local/lib/libgrpc.so.10
> #15 0x7fcf5a99903c in tcp_read(grpc_endpoint*, grpc_slice_buffer*, 
> grpc_closure*, bool) () from /usr/local/lib/libgrpc.so.10
> #16 0x7fcf5a974231 in grpc_endpoint_read(grpc_endpoint*, 
> grpc_slice_buffer*, grpc_closure*, bool) () from 
> /usr/local/lib/libgrpc.so.10
> #17 0x7fcf5a92e240 in 
> continue_read_action_locked(grpc_chttp2_transport*) () from 
> /usr/local/lib/libgrpc.so.10
> #18 0x7fcf5a92e157 in read_action_locked(void*, grpc_error*) () from 
> /usr/local/lib/libgrpc.so.10
> #19 0x7fcf5a973b48 in grpc_combiner_continue_exec_ctx() () from 
> /usr/local/lib/libgrpc.so.10
> #20 0x7fcf5a9865de in grpc_core::ExecCtx::Flush() () from 
> /usr/local/lib/libgrpc.so.10
> #21 0x7fcf5a97d847 in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
> #22 0x7fcf5a98592a in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
> #23 0x7fcf5a98d9f0 in grpc_pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.10
> #24 0x7fcf5a9f40bc in cq_pluck(grpc_completion_queue*, void*, 
> gpr_timespec, void*) () from /usr/local/lib/libgrpc.so.10
> #25 0x7fcf5a9f4383 in grpc_completion_queue_pluck () from 
> /usr/local/lib/libgrpc.so.10
> #26 0x7fcf629ad14d in 
> grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue*, 
> void*, gpr_timespec, void*) () from /usr/local/lib/libgrpc++.so.1
> #27 0x556bdacbcef4 in grpc_impl::CompletionQueue::Pluck 
> (this=0x7fcefba8, tag=0x7fcefa7fbb30) at 
> /usr/local/include/grpcpp/impl/codegen/completion_queue_impl.h:321
> #28 0x7fcf5c4f4e27 in grpc_impl::ClientReader::Read 
> (this=0x7fcefb90, msg=0x556bdb10de10) at 
> /usr/local/include/grpcpp/impl/codegen/sync_stream_impl.h:215
>



[grpc-io] Re: grpc stops forward progress if DNS resolve has 0 addresses

2022-08-05 Thread 'AJ Heller' via grpc.io
That's mysterious. Do you know what state the DNS records are in when this 
occurs? And would it be possible for you to upgrade your gRPC library and 
try to reproduce this? v1.36.4 is over a year old, and a fair number of bug 
fixes have gone in since then.

> We've been unable to reproduce this failure in testing, and would 
> appreciate any pointers:
>

Regarding that, are you able to reproduce the conditions in which the 
failure occurs, or are they maybe not fully understood? e.g., run a local 
DNS server for testing and modify its records.

>
>- what is supposed to re-kick a new DNS resolve if the server list is 
>empty?
>- where to check in the resolver code for an empty server list?
>- or any other ideas for how to track down the problem
>
>
> We're using grpc v1.36.4 w/ libcares2 1.14
>
> Regards,
> Peter Hurley
>



[grpc-io] Re: C++: how to handle blocking code from callback reactor

2022-12-01 Thread 'AJ Heller' via grpc.io
> Am I correct in assuming that I need to call my blocking application code 
> in a new thread and pass the reactor (along with req/resp) to that new 
> thread such that it can call reactor->Finish() once the work is done?

Yes, that's the most general recommendation, which you can find further 
down on that same gRFC you linked to: "One way of doing this is to push 
that code to a separate application-controlled thread."
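
A minimal sketch of that approach for a unary RPC (the Greeter service, 
HelloRequest/HelloReply types, and DoBlockingWork helper are illustrative 
assumptions, not from the gRFC):

  #include <thread>
  #include <grpcpp/grpcpp.h>
  // Greeter, HelloRequest, and HelloReply are assumed protoc-generated types.

  class GreeterService final : public Greeter::CallbackService {
   public:
    grpc::ServerUnaryReactor* SayHello(grpc::CallbackServerContext* context,
                                       const HelloRequest* request,
                                       HelloReply* reply) override {
      grpc::ServerUnaryReactor* reactor = context->DefaultReactor();
      // Hand the blocking work to an application-controlled thread; the
      // gRPC-owned thread that invoked this handler returns immediately.
      std::thread([request, reply, reactor] {
        DoBlockingWork(*request, reply);    // may block arbitrarily long
        reactor->Finish(grpc::Status::OK);  // complete the RPC when done
      }).detach();
      return reactor;
    }
  };

A pool of worker threads is usually preferable to a detached thread per RPC, 
but the contract is the same: the request and reply objects remain valid 
until Finish() is called, so the worker thread may use them freely.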

On Wednesday, November 23, 2022 at 9:10:10 AM UTC-8 sha...@jalloq.co.uk 
wrote:

> Hi,
>
> I'm new to gRPC and am struggling to find examples of how to call 
> application code that blocks when using the callback API.
>
> I found the documentation here: 
> https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md and 
> noted that it recommends that code in any reaction must not block.  So for 
> an example of a simple unary reactor such as that in the 
> route_guide_callback_server.cc, what is the correct way of handing off to 
> your application code?
>
> Am I correct in assuming that I need to call my blocking application code 
> in a new thread and pass the reactor (along with req/resp) to that new 
> thread such that it can call reactor->Finish() once the work is done?
>
> Thanks, Shareef.
>



[grpc-io] Re: Adding Named Pipes feasibilty.

2023-02-22 Thread 'AJ Heller' via grpc.io
If you're asking about Named Pipes on Windows, there is no support at the 
moment. On posix systems, if you have a raw file descriptor for a named 
pipe, I believe you can create a channel from it using 
`CreateCustomInsecureChannelFromFd`: 
https://github.com/grpc/grpc/blob/420180c6d7a5ad00870f099d2e2c79ad367fe9ee/include/grpcpp/create_channel_posix.h#L40-L47.
 
Caveat: I have not tried this myself.
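
A hedged sketch of that approach (untested, posix-only; the path and channel 
target are placeholders):

  #include <fcntl.h>
  #include <grpcpp/create_channel_posix.h>
  #include <grpcpp/grpcpp.h>

  // Open an existing file descriptor ourselves and hand the raw fd to
  // gRPC. Note that gRPC needs a bidirectional byte stream, so a connected
  // unix-socket or socketpair(2) fd is a more realistic candidate than a
  // one-way mkfifo(3) FIFO.
  std::shared_ptr<grpc::Channel> ChannelFromFd(const char* path) {
    int fd = open(path, O_RDWR);
    return grpc::CreateCustomInsecureChannelFromFd(
        "pipe-target", fd, grpc::ChannelArguments());
  }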

There is a long-standing open issue to design, scope, and add support for 
named pipes on Windows: https://github.com/grpc/grpc/issues/13447. The gRPC 
team will not have bandwidth to do this in the foreseeable future, 
unfortunately.
On Sunday, February 19, 2023 at 9:40:35 PM UTC-8 valerij zaporogeci wrote:

> Thank you, I am aware of that project. There is a mention from the author 
> that his implementation uses a custom wire protocol and is thus 
> incompatible with other gRPC implementations. The requirement for me is to 
> be interoperable with C++ client/server and C# client/server solutions. So 
> I am not sure which is easier: reimplementing what Cyanfish has done in C# 
> in C++, or changing/adding directly to gRPC (core).
>
> On Monday, February 20, 2023 at 07:35:11 UTC+2 Manikandan V S wrote:
>
>> We were also exploring the same and found this: 
>> https://github.com/cyanfish/grpc-dotnet-namedpipes . We have yet to 
>> completely analyze it for our usage. 
>>
>> It's .NET-based, but you can still look at the underlying gRPC 
>> implementation to see how it's done.
>>
>> If you find any other solution, let me know.
>>
>> Regards,
>> Mani.
>>
>>
>> On Monday, 20 February 2023 at 10:56:47 UTC+5:30 valerij zaporogeci wrote:
>>
>>> Hello, someone wants to use Named Pipes but still insists on relying on 
>>> gRPC in doing so (shrugs), so I need your help, as the knowledgeable ones, 
>>> in clarifying how feasible it is to make gRPC use Named Pipes instead of 
>>> network or domain sockets. I am a C guy, so this C++ stuff looks scary to 
>>> me, and I can't work out what I need to change: ports reach down into the 
>>> deepest guts of gRPC. From the very start you set ports, and then it goes 
>>> down into the gRPC core, where ports are still everywhere.
>>>
>>> I am not asking you to do the research for me, just an evaluation, since 
>>> you understand the gRPC internals and it might be easy for you to say 
>>> whether changing the transport from a network-based one to something else 
>>> is feasible, or whether it would be too hard because gRPC is not flexible 
>>> in this respect. A bit of direction on where the change should primarily 
>>> happen would also help: is the surface (API) enough (doubtfully), or the 
>>> gRPC core, and where? Would it be a rewrite or more of an addition? Also, 
>>> I would want to make it in a compatible way, so that the code wouldn't 
>>> stop working with the next gRPC update.
>>>
>>> thank you in advance.
>>>
>>



[grpc-io] Re: callback API and details on threading model

2023-03-29 Thread 'AJ Heller' via grpc.io
Sometime this year, gRPC C++ will switch to having, by default, a single 
auto-scaling thread pool per process, and all of gRPC's threaded activities 
will use it. Applications will have some control over this, though, by 
being able to provide custom EventEngine instances per channel or per 
server. See 
https://github.com/grpc/grpc/blob/ec1d75bb0a24a626e669696bb48490e7ac40cc69/include/grpc/event_engine/event_engine.h

The question of "how many RPCs can I operate" will depend heavily on how 
much work your services are doing, and maybe how quickly the pool needs to 
scale.

Best,
-aj
On Thursday, March 23, 2023 at 5:12:49 AM UTC-7 Timo wrote:

> Hey Zach, I thought I answered this, but it seems I missed it. The question 
> about C++.
>
> On Monday, September 26, 2022 at 05:31:14 UTC+2 Naman Shah wrote:
>
>> Hey Zach, I have the same question about the implementation in CPP. 
>>
>> On Thursday, September 22, 2022 at 4:54:19 AM UTC+8 Zach Reyes wrote:
>>
>>> What language of gRPC? That'll allow me to route it to the correct 
>>> person to answer.
>>>
>>> On Sunday, September 18, 2022 at 9:29:16 AM UTC-4 Timo wrote:
>>>
 I did research on this topic but did not find detailed information in 
 the documentation yet.
 How exactly does the thread model of the new callback API work?

 When using the synchronous API, the thread model I guess is this:
 - grpc owns threads, number can be limited
 - Several RPCs can operate on one thread, but there's a limit
 - When too many RPCs are open, the client receives a "resource 
 exhausted"
 - An application with multiple clients needs at least one thread per 
 each open RPC.

 In the callback (not asynchronous) API, I understand:
 - grpc owns threads and spawns new threads if needed
 - multiple RPCs can be handled on one thread non-blocking
 For the server, I wonder how this scales with many (don't have a number 
 in mind) RPCs being open. Assuming all 16 threads are spawned, how many 
 RPCs can I operate?
 Assuming I have an application with multiple clients implemented, each 
 connecting to different servers.
 Would all the clients be able to share the same thread pool, or would 
 (in worst case) each client spawn 16 threads?

 Especially when designing microservices where each service offers a 
 server, but can be a client to another service it may be important to not 
 scale threads too much.

 Thanks

>>>



[grpc-io] gRPC-Core Release 1.52.0

2023-02-09 Thread 'AJ Heller' via grpc.io
This is the release announcement for gRPC-Core 1.52.0 (gribkoff), covering 
the core library and the wrapped languages C++, C#, Objective-C, Python, 
PHP and Ruby. The release can be found at 
https://github.com/grpc/grpc/releases.

Release notes:

Core

   - [༺ EventEngine ༻] Specify requirements for Run* immediate execution. 
   (#32028)
   - Tracing: Add annotations for when call is removed from resolver result 
   queue and lb pick queue. (#31913)
   - ring_hash LB: cap ring size to 4096 with channel arg to override. 
   (#31692)

C++

   - Cmake add separate export for plugin targets. (#31525)

C#

   - Add internal documentation for Grpc.Tools MSBuild integration. (#31784)

Python

   - Change Aio abort() function return type to NoReturn. (#31984)
   - Change the annotated return type of UnaryStreamCall and StreamStreamCall 
   from AsyncIterable to AsyncIterator. (#31906)
   - Build native MacOS arm64 artifacts (universal2). (#31747)
   - Respect CC variable in grpcio python build. (#26480)
   - Revert "Build with System OpenSSL on Mac OS arm64 (#31096)". (#31741)

Ruby

   - Backport "[ruby]: add pre-compiled binaries for ruby 3.2; drop them 
   for ruby 2.6 #32089" to v1.52.x. (#32157)
   - remove some default allocators. (#30434)
   - Fix Ruby build errors in 3.2.0 on Apple M1. (#31997)
   - [Ruby] build: make exported symbol files platform-specific. (#31970)



[grpc-io] Re: Number of threads created in grpc internally

2023-07-12 Thread 'AJ Heller' via grpc.io
I think you'll find these threads answer your question:

https://groups.google.com/g/grpc-io/c/j1A0CY0YG-A/m/W0H6UrkHAwAJ
https://stackoverflow.com/a/76591101/10161

Best,
-aj
On Monday, July 10, 2023 at 5:04:00 AM UTC-7 Softgigant S wrote:

> Hello!
>
> May I ask how to set up or control the number of threads created by gRPC 
> (the event engine?).
> I have a simple callback-based gRPC server-client API.
>
> I used /proc/<pid>/status to view the status of the gRPC client process.
> It showed 14 threads, the same as the number of CPU cores on my PC.
> Is there any parameter to limit the number of threads used by grpc by 
> default?
>
> Thank you!
>



[grpc-io] Re: How to custom channel(endpoint)? [C++]

2023-06-20 Thread 'AJ Heller' via grpc.io
The gRPC public API now provides a way for custom endpoint implementations 
to be provided to the library. It's called the EventEngine API, and you can 
read the generated API docs (though I find the interface code itself, in 
include/grpc/event_engine/event_engine.h, more readable). To control the 
details of how bytes are sent and received, your task would be to write a 
complete EventEngine implementation and provide it to gRPC at runtime via 
the SetEventEngineFactory method.
On Wednesday, May 10, 2023 at 7:51:57 PM UTC-7 Saigut wrote:

> I want to manage the sending and receiving of bytes under gRPC myself, and 
> I may use a reliable transmission protocol other than TCP.
>
> From https://groups.google.com/g/grpc-io/c/6-DyXDp2WiY/m/kdAqjknABQAJ 
> I know that we can create a custom endpoint. 
>
> But how do I achieve that? Is there any document or example? 
>
> Thank you.
>



Re: [grpc-io] grpc executor threads

2023-05-16 Thread 'AJ Heller' via grpc.io
Hello all, I want to offer a quick update. tl;dr: Jeff's analysis is 
correct. The executor is legacy code at this point, slated for deletion, 
and increasingly unused.

We have been carefully replacing the legacy I/O, timer, and async execution 
implementations with a new public EventEngine API and its default 
implementations. The new thread pools do still auto-scale as needed, albeit 
with different heuristics, which are evolving as we benchmark, but threads 
are now reclaimed if/when gRPC calms down from a burst of activity that 
caused the pool to grow. Also, I believe the executor did not rate-limit 
thread creation when closure queues reached their max depths, but the 
default EventEngine implementations do rate-limit thread creation (currently 
capped at 1 new thread per second, but that's an implementation detail which 
may change ... some benchmarks have shown it to be a pretty effective rate). 
Beginning around gRPC v1.48, you should see an increasing number of 
"event_engine" threads and a decreasing number of executor threads. 
Ultimately we aim to unify all async activity into a single auto-scaling 
thread pool under the EventEngine.

And since the EventEngine is a public API, any integrators that want 
complete control over thread behavior can implement their own EventEngine 
and plug it in to gRPC. gRPC will (eventually) use a provided engine for 
all async execution, timers, and I/O. Implementing an engine is not a small 
task, but it is an option people have been requesting for years. Otherwise, 
the default threading behavior provided by gRPC is tuned for performance - 
if starting a thread helps gRPC move faster, then that's what it will do.

Hope this helps!
-aj

On Friday, May 12, 2023 at 4:03:58 PM UTC-7 Jiqing Tang wrote:

> Thanks so much Jeff; agreed, reaping them after they've been idle would be 
> great.
>
> On Friday, May 12, 2023 at 6:59:28 PM UTC-4 Jeff Steger wrote:
>
>> This is as close to an explanation as I have found:
>>
>> look at sreecha’s response in
>> https://github.com/grpc/grpc/issues/14578
>>
>> tl;dr: 
>> “The max number of threads can be 2x the number of cores, and 
>> unfortunately it's not configurable at the moment... any executor threads 
>> and timer-manager threads you see are by design; unless the threads are 
>> more than 2x the number of cores on your machine, in which case it is 
>> clearly a bug”
>>
>>
>> From my observation of the thread count and from my examination of the 
>> grpc code (which I admit I performed some years ago), it is evident to me 
>> that the grpc framework spawns threads up to 2x the number of hardware 
>> cores. It will spawn a new thread if an existing thread in its threadpool 
>> is busy iirc. The issue is that the grpc framework never reaps idle 
>> threads. Once a thread is created, it is there for the lifetime of the grpc 
>> server. There is no way to configure the max number of threads either. It 
>> is really imo a sloppy design. threads aren’t free and this framework keeps 
>> (in my case) dozens and dozens of idle threads around even during long 
>> periods of low or no traffic. Maybe they fixed it in newer versions, idk. 
>>
>> On Fri, May 12, 2023 at 5:58 PM Jiqing Tang  wrote:
>>
>>> Hi Jeff and Mark,
>>>
>>> I just ran into the same issue with an async C++ gRPC server (version 
>>> 1.37.1), was curious about these default-executor threads, and then found 
>>> this thread. Did you guys figure out what these threads are for? The 
>>> number seems to be about 2x the number of polling worker threads.
>>>
>>> Thanks!
>>>
>>> On Friday, January 7, 2022 at 3:47:51 PM UTC-5 Jeff Steger wrote:
>>>
 Thanks Mark, I will turn on trace and see if I see anything odd. I was 
 reading about a function called Executor::SetThreadingDefault(bool enable) 
 that I think I can safely call after I create my grpc server. It is a 
 public function and seems to allow me to toggle between a threaded 
 implementation and an async one. Is that accurate? Is calling this function 
 safe to do and/or recommended (or at least not contra-recommended)? Thanks 
 again for your help!

 Jeff



 On Fri, Jan 7, 2022 at 11:14 AM Mark D. Roth  wrote:

> Oh, sorry, I thought you were asking about the sync server threads.  
> The default-executor threads sound like threads that are spawned 
> internally 
> inside of C-core for things like synchronous DNS resolution; those should 
> be completely unrelated to the sync server threads.  I'm not sure what 
> would cause those threads to pile up.
>
> Try running with the env vars GRPC_VERBOSITY=DEBUG GRPC_TRACE=executor 
> and see if that yields any useful log information.  In particular, try 
> running that with a debug build, 

[grpc-io] gRPC-Core Release 1.57.0

2023-08-14 Thread 'AJ Heller' via grpc.io
This is the 1.57.0 (grounded) release announcement for gRPC-Core and the 
wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. The latest 
release notes are at https://github.com/grpc/grpc/releases.

This release contains refinements, improvements, and bug fixes, with
highlights listed below.
Core

   - [EventEngine] Change GetDNSResolver to return 
   absl::StatusOr<std::unique_ptr<DNSResolver>>. (#33744)
   - [deps] Remove libuv dependency. (#33748)
   - [ssl] Fix SSL stack to handle large handshake messages whose length 
   exceeds the BIO buffer size. (#33638)
   - [BoringSSL] Update third_party/boringssl-with-bazel. (#33690)
   - [iomgr][EventEngine] Improve server handling of file descriptor 
   exhaustion. (#33656)
   - [ruby] experimental client side fork support. (#33430)
   - [core] Add a channel argument to set DSCP on streams. (#28322)
   - [xDS LB] xDS pick first support. (#33540)
   - [tls] Remove use of SSL_CTX_set_client_CA_list for TLS server 
   credentials. (#33558)
   - [EventEngine] Simplify EventEngine::DNSResolver API. (#33459)
   - [iomgr][Windows] Return proper error code to client when connection is 
   reset. (#33502)
   - [fork] simplify Fork::SetResetChildPollingEngineFunc to fix nested 
   forking. (#33495)
   - [lb pick_first] Enable random shuffling of address list. (#33254)
   - [HTTP2] Fix inconsistencies in keepalive configuration. (#33428)
   - [c-ares] Upgrade c-ares dependency to 1.19.1. (#33392)
   - [Rls] de-experimentalize RLS in XDS. (#33290)

C++

   - [otel] Add bazel dependency. (#33548)

C#

   - [csharp] Include correct build of Grpc.Tools in nightly packages. 
   (#33595)
   - [csharp] reintroduce base_namespace experimental option to C# (with a 
   patch). (#33535)

Objective-C

   - [Protobuf] Upgrade third_party/protobuf to 23.4. (#33695)

Python

   - [posix] Enable systemd sockets for libsystemd>=233. (#32671)
   - [python O11Y] Initial Implementation. (#32974)

Ruby

   - [ruby] experimental client side fork support (#33430)
   - [ruby] backport "[ruby] remove unnecessary background thread startup 
   wait logic that interferes with forking #33805" to v1.57.x. (#33846)
   - [Ruby] remove manual strip in ruby ext conf. (#33641)
   - [ruby] simplify shutdown; remove unnecessary attempts at grpc_shutdown. 
   (#33674)
   - [ruby] Add -weak_framework CoreFoundation to link line. (#33538)
   - [Ruby] Fix memory leak in grpc_rb_call_run_batch. (#33368)
   - [Ruby] Fix memory leak in grpc_rb_server_request_call. (#33371)


-- 

AJ Heller
Software Engineer

h...@google.com



[grpc-io] Re: On Windows, where are trust certificates stored?

2024-02-13 Thread 'AJ Heller' via grpc.io
I think this is a general Windows problem; there's nothing gRPC-specific 
you'd want to do here. A quick Google search turned up this: 
https://learn.microsoft.com/en-us/skype-sdk/sdn/articles/installing-the-trusted-root-certificate

On Monday, February 12, 2024 at 8:07:11 AM UTC-8 Andrew Bay wrote:

> gRPC is used as a library inside of the databricks-connect library in 
> python.  I cannot programmatically add a trust certificate for the server 
> it is connecting to.  Where can I put my firewall's MitM certificate so I 
> do not get "CERTIFICATE_VERIFY_FAILED" errors on a windows machine?



[grpc-io] Re: Setting gRPC internal threads' affinity

2024-02-06 Thread 'AJ Heller' via grpc.io
Dan,

Replying here on the mailing list thread.

> Thanks AJ!
>
> Using taskset is not an option for me, as my gRPC server is part of the 
> executable that also does the more sensitive IO work. So what I need is to 
> differentiate the threads' affinities within one process. You wouldn't 
> recommend patching the affinity into thd.cc because of the possible 
> impact/side effects on gRPC behavior, or is there another reason I should 
> be aware of?

Maintaining patches against gRPC may make it difficult to upgrade your 
library. It's best to stay up to date with the latest gRPC versions if 
possible, to take advantage of bug fixes, performance improvements, new 
features, etc. You'll also be hard-pressed to get support for a modified 
library, presuming you run into something tricky and want to post here or 
on Stack Overflow. Those are my main reservations, but they're subjective; 
please do what makes sense for your use case.

>
> As for the EventEngine interface - this looks very interesting, I will 
> take a look. Thanks. Where can I find the default gRPC implementation of 
> the EventEngine?

The Posix, Windows, and iOS implementations all live here 
https://github.com/grpc/grpc/tree/cb7172dc17c005e696d0b6945d2927a9e8bf81ac/src/core/lib/event_engine.
 
For learning purposes: Posix is the most complex/featureful, and Windows is 
comparatively simple.

>
> Thanks a lot,
> Dan  

On Monday, February 5, 2024 at 1:30:35 PM UTC-8 AJ Heller wrote:

> Hi Dan,
>
> If you're interested in CPU affinity for the entire server process on 
> Linux, you can use `taskset` https://linux.die.net/man/1/taskset. 
> Otherwise, you'll likely want to patch `thd.cc` and use pthread's affinity 
> APIs, but I don't recommend it.
>
> For more advanced use cases with the C/C++ library, you can also get full 
> control over the threading model and all async behavior by implementing a 
> custom EventEngine.
>
> Cheers,
> -aj
> On Thursday, January 25, 2024 at 10:26:07 AM UTC-8 Dan Cohen wrote:
>
>> Hello,
>>
>> I'm implementing an async gRPC server in c++.
>> I need to control and limit the cores that are used by gRPC internal 
>> threads (the completion queues handler threads are controlled by me) - i.e. 
>> I need to set those threads' affinity.
>>
>> Is there a way for me to do this without changing gRPC code? 
>> If not, where in the code would you recommend to start looking for 
>> changing this? 
>>
>> Thanks,
>> Dan
>>
>



[grpc-io] Re: Setting gRPC internal threads' affinity

2024-02-05 Thread 'AJ Heller' via grpc.io
Hi Dan,

If you're interested in CPU affinity for the entire server process on 
Linux, you can use `taskset` https://linux.die.net/man/1/taskset. 
Otherwise, you'll likely want to patch `thd.cc` and use pthread's affinity 
APIs, but I don't recommend it.
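
For the threads the application does own (e.g. your completion-queue handler 
threads), a minimal Linux-only sketch using pthread's affinity API, with an 
arbitrary example core index:

  #include <pthread.h>
  #include <sched.h>
  #include <thread>

  // Pin the calling thread to a single core. pthread_setaffinity_np is
  // Linux-specific and non-portable.
  void PinCurrentThreadToCore(int core) {
    cpu_set_t cpus;
    CPU_ZERO(&cpus);
    CPU_SET(core, &cpus);
    pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);
  }

  int main() {
    std::thread handler([] {
      PinCurrentThreadToCore(2);  // keep this handler off sensitive cores
      // ... drain your grpc::CompletionQueue here ...
    });
    handler.join();
  }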

For more advanced use cases with the C/C++ library, you can also get full 
control over the threading model and all async behavior by implementing a 
custom EventEngine.

Cheers,
-aj
On Thursday, January 25, 2024 at 10:26:07 AM UTC-8 Dan Cohen wrote:

> Hello,
>
> I'm implementing an async gRPC server in c++.
> I need to control and limit the cores that are used by gRPC internal 
> threads (the completion queues handler threads are controlled by me) - i.e. 
> I need to set those threads' affinity.
>
> Is there a way for me to do this without changing gRPC code? 
> If not, where in the code would you recommend to start looking for 
> changing this? 
>
> Thanks,
> Dan
>



[grpc-io] Re: Using gRPC on localhost

2024-01-30 Thread 'AJ Heller' via grpc.io
Hi Dimitris,

AF_UNIX support is being added to Windows platforms; you can follow the 
work here: https://github.com/grpc/grpc/pull/34801
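
On posix platforms today (and on Windows once that PR lands), the client side 
is just a "unix:" target; a minimal sketch, with an assumed socket path:

  #include <grpcpp/grpcpp.h>

  // The "unix:" scheme is part of gRPC's standard name resolution; the
  // socket path below is a placeholder.
  std::shared_ptr<grpc::Channel> LocalChannel() {
    return grpc::CreateChannel("unix:/tmp/app.sock",
                               grpc::InsecureChannelCredentials());
  }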

Best,
-aj

On Sunday, January 21, 2024 at 3:27:58 PM UTC-8 Dimitris Servis wrote:

> I want to use gRPC for IPC, on localhost, using C++, in particular for 
> Windows machines. I would like to avoid using https, and I wonder whether 
> the non-secure http poses any threat. I would like to use the UDS solution 
> but this is only supported in C# and not C++. I could use C# and CLI to 
> provide the functionality to C++, but I have a blocker with the C# version.
>
> So I am left with the option to either use https or implement something 
> myself. However, this is becoming uncharted territory for me. I would 
> therefore like to ask, what do people think are my options?
>
> I tried to find a way to implement a transport layer using e.g. pipes only 
> for Windows as a plugin, but documentation is scarce...
>



[grpc-io] Re: Using gRPC on localhost

2024-01-30 Thread 'AJ Heller' via grpc.io
Sorry, to clarify, the gRPC C/C++ library is getting support for unix 
sockets on Windows. As you pointed out, other languages/libraries/platforms 
already have that support.

On Tuesday, January 30, 2024 at 10:23:10 AM UTC-8 AJ Heller wrote:

> Hi Dimitris,
>
> AF_UNIX support is being added to Windows platforms, you can follow the 
> work here: https://github.com/grpc/grpc/pull/34801
>
> Best,
> -aj
>
> On Sunday, January 21, 2024 at 3:27:58 PM UTC-8 Dimitris Servis wrote:
>
>> I want to use gRPC for IPC, on localhost, using C++, in particular for 
>> Windows machines. I would like to avoid using https, and I wonder whether 
>> the non-secure http poses any threat. I would like to use the UDS solution 
>> but this is only supported in C# and not C++. I could use C# and CLI to 
>> provide the functionality to C++, but I have a blocker with the C# version.
>>
>> So I am left with the option to either use https or implement something 
>> myself. However, this is becoming uncharted territory for me. I would 
>> therefore like to ask, what do people think are my options?
>>
>> I tried to find a way to implement a transport layer using e.g. pipes 
>> only for Windows as a plugin, but documentation is scarce...
>>
>



[grpc-io] Re: Issue in cross compiling grpc for armv7l architecture

2024-04-15 Thread 'AJ Heller' via grpc.io
I don't use CMake, but I believe you can pass `-DgRPC_USE_SYSTEMD=OFF` (based on 
https://github.com/grpc/grpc/blob/84ee28e6956ed7cd51462aad52e64782dd5ca34b/cmake/systemd.cmake#L17).
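
Applied to the cmake invocation from your step 3, that would look something 
like this (only the -DgRPC_USE_SYSTEMD flag is new; the rest is your existing 
command):

  cmake -DCMAKE_TOOLCHAIN_FILE=/home/user/workspace/grpc/grpc_armv7l/toolchain.cmake \
    -DgRPC_USE_SYSTEMD=OFF \
    -DgRPC_INSTALL=ON \
    ... (your remaining options) \
    ../..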

Hope this helps,
-aj
On Wednesday, April 10, 2024 at 7:00:30 AM UTC-7 Pragadeesh nagaraj wrote:

> Need help on this issue, still not resolved.
>
> On Thu, 4 Apr, 2024, 9:55 pm Pragadeesh nagaraj,  
> wrote:
>
>> Hi, 
>> I need to cross-compile grpc to be used on a BeagleBone Black (armv7l 
>> architecture).
>>
>> I tried cross compilation but got an error.
>>
>> Steps Followed,
>> 1. Installed grpc for linux host system x86_64 architecture - This is 
>> successful.
>> 2. Created a CMake toolchain file with the following details:
>> cat > toolchain.cmake <<'EOT'
>> SET(CMAKE_SYSTEM_NAME Linux)
>> SET(CMAKE_SYSTEM_PROCESSOR armv7l)
>> set(CMAKE_STAGING_PREFIX /opt/grpc)
>> set(CMAKE_C_COMPILER 
>> /home/user/Tool_Chain/arm_cross_compile/gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc)
>> set(CMAKE_CXX_COMPILER 
>> /home/user/Tool_Chain/arm_cross_compile/gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++)
>> set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
>> set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
>> set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
>> set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
>> EOT
>> 3. Gave the CMake options to build grpc:
>> mkdir -p "cmake/build_arm"
>> pushd "cmake/build_arm"
>> cmake -DCMAKE_TOOLCHAIN_FILE=/home/user/workspace/grpc/grpc_armv7l/toolchain.cmake \
>>   -DgRPC_INSTALL=ON \
>>   -DBUILD_SHARED_LIBS=ON \
>>   -DgRPC_BUILD_CSHARP_EXT=OFF \
>>   -DgRPC_BUILD_GRPC_CSHARP_PLUGIN=OFF \
>>   -DgRPC_BUILD_GRPC_NODE_PLUGIN=OFF \
>>   -DgRPC_BUILD_TESTS=OFF \
>>   -DCMAKE_INSTALL_PREFIX=/opt/grpc_armv7l \
>>   ../..
>> make "-j${GRPC_CPP_DISTRIBTEST_BUILD_COMPILER_JOBS}" install
>> popd
>> 4. It builds some files, but when it reaches systemd_utils.cc it throws a 
>> "No such file or directory" error for a missing include.
>> 5. Should this include be on the host machine or in the cross-compile 
>> toolchain's include path?
>> 6. I am using the toolchain from Linaro, version 
>> gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabihf.
>>
>> Need help in resolving the issue.
>>
>>



[grpc-io] Re: Plugin failed with status code

2024-03-26 Thread 'AJ Heller' via grpc.io
Did you follow the quickstart guide to build and install gRPC and protocol 
buffers using cmake? https://grpc.io/docs/languages/cpp/quickstart/
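
If you do need to invoke protoc from CMake at configure time, here is a 
hedged sketch that avoids hard-coding the plugin path (the proto file name 
and output directory are assumptions):

  # find_program returns an absolute path, which protoc requires for plugins.
  find_program(GRPC_CPP_PLUGIN grpc_cpp_plugin)
  execute_process(
    COMMAND protoc -I ${CMAKE_CURRENT_SOURCE_DIR}
            --grpc_out=${CMAKE_CURRENT_BINARY_DIR}
            --plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN}
            ${CMAKE_CURRENT_SOURCE_DIR}/my_service.proto)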

Best,
-aj
On Monday, March 25, 2024 at 11:09:41 PM UTC-7 Suraj Kottayi wrote:

> How do I generate code using CMake during configuration?
> The issue with the CMake command execute_process() is that the argument 
> "--plugin=protoc-gen-grpc=/usr/local/grpc_cpp_plugin"
> has to be absolute; trying to change it into a variable throws:
>
> : program not found or is not executable
> Please specify a program using absolute path or make sure the program is 
> available in your PATH system variable
> --grpc_out: protoc-gen-grpc: Plugin failed with status code 1
>



[grpc-io] Re: How to resolve potential grpc::ClientBidiReactor data racing

2024-03-26 Thread 'AJ Heller' via grpc.io
Hi Zhanhui,

Just in case, please read through the callback API spec to refamiliarize 
yourself: 
https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md

Note that you can only have one outstanding read or write at a time. If you 
are calling StartWrite while a previous write's OnWriteDone method has not 
yet been called, that's improper use of the API (and effectively undefined 
behavior).
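
One common way to satisfy that constraint is to queue outgoing messages and 
only issue the next StartWrite from OnWriteDone. A minimal sketch (Request 
and Response are assumed protoc-generated message types):

  #include <deque>
  #include <mutex>
  #include <grpcpp/grpcpp.h>

  class QueuedWriter : public grpc::ClientBidiReactor<Request, Response> {
   public:
    // Callable from any application thread, at any time.
    void Write(Request msg) {
      std::lock_guard<std::mutex> lock(mu_);
      queue_.push_back(std::move(msg));
      if (!write_in_flight_ && !broken_) {
        write_in_flight_ = true;
        StartWrite(&queue_.front());  // exactly one write outstanding
      }
    }

    void OnWriteDone(bool ok) override {
      std::lock_guard<std::mutex> lock(mu_);
      queue_.pop_front();
      write_in_flight_ = false;
      if (!ok) {
        broken_ = true;  // per the spec: no further Start* calls
        return;
      }
      if (!queue_.empty()) {
        write_in_flight_ = true;
        StartWrite(&queue_.front());
      }
    }

   private:
    std::mutex mu_;
    std::deque<Request> queue_;  // deque keeps front references stable
    bool write_in_flight_ = false;
    bool broken_ = false;
  };

Since the next StartWrite is only issued from Write() when the reactor is 
idle, or from OnWriteDone() after the previous write completes, the race 
described above cannot occur.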

I hope this helps!

Best,
-aj
On Monday, March 18, 2024 at 7:52:40 PM UTC-7 Zhanhui Li wrote:

> Hi gRPC C/C++ community,
>
> I am building a C/C++ client for an open-source project using gRPC, and we 
> have a bidirectional streaming API.
>
> For the client-side implementation, we need to create a class inheriting 
> from ClientBidiReactor and overriding some methods.
>
> Note the On*Done methods have a "bool ok" parameter, requiring "If false, 
> no new read/write operation will succeed, and any further Start* should 
> not be called."
>
> The problem is, a Start{Read, Write} call could be concurrent with the 
> On*Done one.
> What would the consequences be if Start* is called after On*Done with ok 
> being false? Is there a way to work around this?
>
> Best Regards!
>
> Zhanhui Li 
>
