[grpc-io] Re: Async C++ server w/multi-threading approach

2023-10-26 Thread 'yas...@google.com' via grpc.io
> Our understanding is that the C++ gRPC team is working towards, or putting 
more effort into, perfecting/optimizing the callback API approach?
Yes

> 1.- By using the callback API approach, will we be able to serve 
different users concurrently the same way we do with our current 
implementation?
Yes
> 2.- Will we need to implement a threading logic like the one we have, or 
is it not needed?
Not needed with the C++ callback API.
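
(For illustration, a minimal sketch of a callback-API server for a hypothetical Greeter service, not the poster's actual RPCs: gRPC runs each handler on its own internal thread pool, so concurrent clients are served in parallel with no user-managed threading logic.)

#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "greeter.grpc.pb.h"  // hypothetical generated header

class GreeterService final : public Greeter::CallbackService {
  grpc::ServerUnaryReactor* SayHello( grpc::CallbackServerContext* ctx,
                                      const HelloRequest* request,
                                      HelloReply* reply ) override {
    // Runs on a gRPC-managed thread; overlapping RPCs from different
    // clients are dispatched concurrently without any threading code here.
    reply->set_message( "Hello " + request->name() );
    auto* reactor = ctx->DefaultReactor();
    reactor->Finish( grpc::Status::OK );
    return reactor;
  }
};

int main() {
  GreeterService service;
  grpc::ServerBuilder builder;
  builder.AddListeningPort( "0.0.0.0:50051", grpc::InsecureServerCredentials() );
  builder.RegisterService( &service );
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();  // gRPC's internal threads serve all clients concurrently.
}

(Streaming RPCs follow the same pattern, returning a ServerWriteReactor or ServerBidiReactor instead of the default unary reactor.)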

On Wednesday, October 25, 2023 at 6:21:28 PM UTC-7 Pedro Alfonso wrote:

> Hi Yas,
>
> First of all, thanks for coming back to us.
> That's a really important comment, and please correct us if we are wrong: 
> our understanding is that the C++ gRPC team is working towards, or putting 
> more effort into, perfecting/optimizing the callback API approach? By the 
> way, we also agree that it's easier to use.
>
> Kindly help us with these additional questions:
>
> 1.- By using the callback API approach, will we be able to serve different 
> users concurrently the same way we do with our current implementation?
> 2.- Will we need to implement a threading logic like the one we have, or 
> is it not needed?
>
> Thanks in advance.
>
> Regards,
>
> Pedro
> On Wednesday, October 25, 2023 at 1:17:23 PM UTC-5 yas...@google.com 
> wrote:
>
>> We have been recommending the C++ callback API instead of the 
>> completion-queue-based API, since it's easier to use. All performance 
>> optimizations that we are working on are targeting the callback API.
>>
>> On Thursday, October 19, 2023 at 8:03:42 AM UTC-7 Pedro Alfonso wrote:
>>
>>> Hello,
>>>
>>> First let me explain what we have in our C++ gRPC Async server codebase:
>>>
>>> - We have 2 unary-response RPCs.
>>> - And we have 2 streaming-response RPCs, which cover over 95% of the 
>>> clients' API consumption, meaning they are really important to our 
>>> streaming-based implementation.
>>>
>>> Of the 2 streaming-response RPCs, the one below is the most critical to 
>>> us:
>>>
>>> // Inner class StreamAssetNodes
>>> class StreamAssetNodes : public RequestBase {
>>> public:
>>>     StreamAssetNodes( AsyncAssetStreamerManager& owner )
>>>         : RequestBase( owner ), ownerClass( owner ) {
>>>         // Asks gRPC to notify us (via the CONNECT tag) when a client
>>>         // starts this streaming RPC.
>>>         owner_.grpc().service_.RequestStreamAssetNodes(
>>>             &context_, &stream_, cq(), cq(),
>>>             in_handle_.tag( Handle::Operation::CONNECT,
>>>                             [this, &owner]( bool ok, Handle::Operation /* op */ ) {
>>>                 LOG_DEBUG << "\n" + me( *this ) << "\n\n**********\n"
>>>                           << "- Processing a new connect from " << context_.peer()
>>>                           << "\n\n**********\n" << endl;
>>>                 cout << "\n" + me( *this ) << "\n**********\n"
>>>                      << "- Processing a new connect from " << context_.peer()
>>>                      << "\n**********\n" << endl;
>>>
>>>                 if ( !ok ) [[unlikely]] {
>>>                     LOG_DEBUG << "The CONNECT-operation failed." << endl;
>>>                     cout << "The CONNECT-operation failed." << endl;
>>>                     return;
>>>                 }
>>>
>>>                 // Creates a new instance so the service can handle
>>>                 // requests from a new client
>>>                 owner_.createNew( owner );
>>>                 // Reads the request's parameters
>>>                 readNodeIds();
>>>             } ) );
>>>     }
>>>
>>> private:
>>>     // Objects and variables
>>>     AsyncAssetStreamerManager& ownerClass;
>>>     ::Illuscio::AssetNodeIds request_;
>>>     ::Illuscio::AssetNodeComponent reply_;
>>>     ::grpc::ServerContext context_;
>>>     // Template arguments below were mangled in the original mail;
>>>     // reconstructed from the reply/request member types above.
>>>     ::grpc::ServerAsyncReaderWriter< ::Illuscio::AssetNodeComponent,
>>>                                      ::Illuscio::AssetNodeIds > stream_ { &context_ };
>>>
>>>     // Element type was lost in the mail formatting; string is assumed.
>>>     vector<string> nodeids_vector;
>>>     // Contains mapping for all the nodes of a set of assets
>>>     json assetsNodeMapping;
>>>     // Contains mapping for all the nodes of a particular asset
>>>     json assetNodeMapping;
>>>     ifstream nodeFile;
>>>     // Handle for messages coming in
>>>     Handle in_handle_ { *this };
>>>     // Handle for messages going out
>>>     Handle out_handle_ { *this };
>>>
>>>     int fileNumber = 0;
>>>     const int chunk_size = 16 * 1024;
>>>     char buffer[16 * 1024];
>>>
>>>     // Methods
>>>
>>>     void readNodeIds() {
>>>         // Reads the RPC request parameters
>>>         stream_.Read( &request_,
>>>                       in_handle_.tag( Handle::Operation::READ,
>>>                                       [this]( bool ok, Handle::Operation op ) {
>>>             if ( !ok ) [[unlikely]] { return; }
>>>
>>>             // Assigns the request to the nodeids vector
>>>             nodeids_vector.assign( request_.nodeids().begin(),
>>>                                    request_.nodeids().end() );
>>>             request_.clear_nodeids();
>>>
>>>             if ( !nodeids_vector.empty() ) {
>>>                 ownerClass.assetNodeMapping =
>>>                     ownerClass.assetsNodeMapping[request_.uuid()];
>>>                 if ( ownerClass.assetNodeMapping.empty() ) {
>>>                     stream_.Finish(
>>>                         grpc::Status( grpc::StatusCode::NOT_FOUND,
>>>                                       "Asset's UUID not found in server..." ),
>>>                         in_handle_.tag( Handle::Operation::FINISH,
>>>                                         [this]( bool ok, Handle::Operation /* op */ ) {
>>>                             if ( !ok ) [[unlikely]] {
>>>                                 LOG_DEBUG << "The FINISH request-operation failed." << endl;
>>>                                 cout << "The FINISH request-operation failed." << endl;
>>>                             }
>>>
>>>                             LOG_DEBUG << "Asset's UUID not found in server: "
>>>                                       << request_.uuid() <<
>>> 

Re: [grpc-io] Java xDS Proxyless connection when client is also a gRPC server accessed via grpc-web

2023-10-26 Thread 'Feng Li' via grpc.io


On Tuesday, October 24, 2023 at 5:34:24 PM UTC-7 Oleg Cohen wrote:

Hi Feng,

Thank you for replying with options! Really appreciate it!

Your reading of this as “a functionality of istio to provide the templates 
to optimize your workload creation, and support gRPC-web on the edge of a 
proxyless gRPC mesh” is absolutely correct. I think such a capability within 
Istio would be great.

If you don’t mind, I have some follow-up questions regarding these options:

1. On the option to configure a separate Envoy proxy: would I add it as a 
separate container in that same pod? Are there any resources on how to 
inject such a container? Perhaps it is a generic k8s capability.

Yes, but it mostly depends on your needs. For convenience, you can add a 
container in the same pod, as a sidecar that scales up/down together with 
your pod; it's a generic k8s capability (see the sketch below). In contrast, 
you can also run Envoy as a separate edge service in its own pod and scale 
it up/down independently.
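
(To illustrate the sidecar variant, a minimal sketch of a pod spec with a manually added Envoy container; all names, images, and ports here are assumptions, not from this thread:)

apiVersion: v1
kind: Pod
metadata:
  name: backend                        # assumed pod name
spec:
  containers:
    - name: app                        # the gRPC service itself
      image: example/backend:latest    # assumed image
      ports:
        - containerPort: 50051         # assumed gRPC port
    - name: envoy                      # the sidecar; scales up/down with the pod
      image: envoyproxy/envoy:v1.27.0
      ports:
        - containerPort: 8080          # assumed gRPC-web listener
      volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy        # Envoy's image reads /etc/envoy/envoy.yaml
  volumes:
    - name: envoy-config
      configMap:
        name: backend-envoy-config     # assumed ConfigMap holding envoy.yaml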


2. I would love to improve the template! I looked into it, but I lack 
examples/documentation on how to do it. Are there resources/examples I can 
look at for how a template can be adjusted, extended, and deployed?

The Istio forum would be a better place to get help. There are many systems 
that use gRPC, and istio is one of them; these systems built on top of gRPC 
are responsible for helping their customers streamline their configuration, 
and they compete on that (which is a good thing for the whole ecosystem).


3. This one is mine. I am considering building my own xDS control plane in 
Java based on the Envoy control-plane library. I have it working as a 
standalone server and am now working on adding k8s integration for endpoint 
identification. Perhaps it is too ambitious on my part and the above two 
options will do the job.


Again, thank you for the info!

Best regards,
Oleg




On Oct 24, 2023, at 3:35 PM, 'Feng Li' via grpc.io  
wrote:

Thanks for the question.
gRPC-web has been natively supported in Envoy for years, and since istio 
uses envoy as its sidecar proxy, you get the functionality for free.
However, with the istio template you mentioned, that sidecar proxy is 
removed.
Here are some options in my mind:
- Continue to use the istio grpc-agent template for both backend and 
backend2 in your case, and configure a separate Envoy proxy to serve 
gRPC-web traffic in front of backend (a sketch follows below).
- Or improve the istio template to allow the sidecar proxy for backend.
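
(For the first option, the key Envoy piece is the envoy.filters.http.grpc_web HTTP filter, which translates browser gRPC-web calls into plain gRPC toward the backend. A hedged envoy.yaml sketch; the listener port, cluster name, and backend address are assumptions:)

static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: grpc_web
                route_config:
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: backend_grpc }
                http_filters:
                  # Translates gRPC-web (HTTP/1.1) into gRPC (HTTP/2)
                  - name: envoy.filters.http.grpc_web
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: backend_grpc
      type: STRICT_DNS
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}   # gRPC upstream needs HTTP/2
      load_assignment:
        cluster_name: backend_grpc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: backend, port_value: 50051 }

(In practice a CORS filter is usually added ahead of grpc_web for browser traffic.)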

I read this as a functionality of istio to provide the templates to 
optimize your workload creation, and support gRPC-web on the edge of a 
proxyless gRPC mesh.

On Tuesday, October 17, 2023 at 8:46:37 PM UTC-7 Oleg Cohen wrote:

I was referring to this discussion: 
https://github.com/istio/istio/issues/40318

It does discuss the requirement I need. There are a couple of points there:

   - There is a mention that this capability was envisioned and implemented 
   but not documented.
   - There is also a grpc-mixed.yaml custom template that is shared. I 
   wanted to try it, but I am not sure how such a file is deployed into 
   istio (one possible way is sketched below).
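
(For what it's worth, Istio's custom injection templates, an experimental feature, are typically registered at install time through the IstioOperator values; a hedged sketch, assuming the shared file's contents are registered under the name grpc-mixed:)

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    sidecarInjectorWebhook:
      templates:
        grpc-mixed: |
          # paste the contents of the shared grpc-mixed.yaml template here

(Applied with istioctl install -f <file>, after which a pod selects it via the annotation inject.istio.io/templates: grpc-mixed.)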

Thank you!
Oleg

On Monday, October 16, 2023 at 8:52:13 PM UTC-7 Oleg Cohen wrote:

Greetings!

I have a use case as follows:

   - A React SPA application (frontend) calls Service 1 using gRPC-Web
   - Service 1 (backend) calls Service 2 (backend2) via gRPC using 
   proxyless xDS

I have been able to build three deployments:

   - frontend
   - backend
   - backend2

backend uses a sidecar proxy and a grpc-web port to allow connections from 
frontend. This works well.

backend2 has an inject.istio.io/templates: grpc-agent annotation, and 
Service2 successfully initializes an XdsServer and finds GRPC_XDS_BOOTSTRAP.
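
(For reference, that initialization in grpc-java typically looks like the hedged sketch below; the port and service implementation are hypothetical:)

import io.grpc.InsecureServerCredentials;
import io.grpc.Server;
import io.grpc.xds.XdsServerBuilder;
import io.grpc.xds.XdsServerCredentials;

public class Service2Server {
    public static void main(String[] args) throws Exception {
        // XdsServerBuilder reads the bootstrap file named by GRPC_XDS_BOOTSTRAP
        // and registers the server with the xDS control plane.
        Server server = XdsServerBuilder
            .forPort(50051, XdsServerCredentials.create(InsecureServerCredentials.create()))
            // .addService(new Service2Impl())  // hypothetical generated service impl
            .build()
            .start();
        server.awaitTermination();
    }
}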

The issue is how to initialize and use an xDS proxyless client within 
Service 1 to call Service 2. I am not able to use the grpc-agent annotation 
on backend (Service 1), as it removes the sidecar and makes gRPC-Web 
impossible. 
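
(In grpc-java, the proxyless client side usually amounts to opening a channel with an xds: target; a hedged sketch with hypothetical names, assuming the io.grpc:grpc-xds artifact is on the classpath and the same bootstrap file is visible to Service 1:)

import io.grpc.Grpc;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import java.util.concurrent.TimeUnit;

public class Service2Client {
    public static void main(String[] args) throws InterruptedException {
        // The xds: scheme hands name resolution and load balancing to the
        // xDS control plane; the target below is a hypothetical name.
        ManagedChannel channel = Grpc.newChannelBuilder(
                "xds:///backend2.default.svc.cluster.local:50051",
                InsecureChannelCredentials.create())
            .build();
        // A generated stub would be created from this channel, e.g.
        // Service2Grpc.newBlockingStub(channel) (hypothetical stub name).
        channel.shutdown();
        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}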

I am wondering if there is a way to make this happen. I did see suggestions 
about using custom templates to have both annotations, but couldn't find an 
example, and I am not sure whether that is the right way.

Thank you!
Oleg  

