[grpc-io] gRFC A78: Non-per-call metrics for WRR, Pick First, and XdsClient

2024-02-23 Thread 'Mark D. Roth' via grpc.io
I've published a gRFC for non-per-call metrics for WRR, Pick First, and XdsClient: https://github.com/grpc/proposal/pull/419 This is in conjunction with the gRFC that Yash recently posted for the non-per-call metric framework. Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRFC L113: Remove C-core grpc_channel_num_external_connectivity_watchers() function

2024-02-07 Thread 'Mark D. Roth' via grpc.io
I've written a gRFC proposing to remove the grpc_channel_num_external_connectivity_watchers() function from the C-core API: https://github.com/grpc/proposal/pull/417 Comments welcome. -- Mark D. Roth Software Engineer Google, Inc. -- You received this message because you are subscribed to

[grpc-io] gRFC A75: xDS Aggregate Cluster Behavior Fixes

2023-12-28 Thread 'Mark D. Roth' via grpc.io
I've written a gRFC describing fixes in xDS aggregate cluster behavior, including fixing stateful session affinity to work across priorities: https://github.com/grpc/proposal/pull/405 Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRFC A74: xDS Config Tears

2023-12-28 Thread 'Mark D. Roth' via grpc.io
I've written a gRFC describing some structural changes that we're going to make to improve how we deal with certain config tear cases in xDS: https://github.com/grpc/proposal/pull/404 Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] Re: gRPC, C++ - Server side 'service_config' not recognized by clients

2023-12-13 Thread 'Mark D. Roth' via grpc.io
There is no way to set the service config on the gRPC server side. The name resolver has nothing to do with the gRPC server side; the name resolver is the component in the client that resolves a name into a set of addresses, so that the client knows what servers to connect to. For more

[grpc-io] Re: The grpc::ServerInterface::Shutdown() behavior

2023-12-12 Thread 'Mark D. Roth' via grpc.io
The behavior is documented in our API reference. It does cancel all pending RPCs. Note that applications using the CQ-based async API are responsible for noting that the calls have been

[grpc-io] Re: gRPC, C++ - Server side 'service_config' not recognized by clients

2023-12-12 Thread 'Mark D. Roth' via grpc.io
The service config is not sent by the gRPC server. It cannot be done that way, because the service config sets parameters that are needed on the client before the client has contacted any server. Instead, the service config is intended to be returned to the client via the name resolver.
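To illustrate the point above, here is a minimal sketch of a service config that a name resolver could return to the client; the service name and values are hypothetical, but the field names follow the documented service config JSON format:

```json
{
  "loadBalancingConfig": [ { "round_robin": {} } ],
  "methodConfig": [
    {
      "name": [ { "service": "example.EchoService" } ],
      "timeout": "5s",
      "waitForReady": true
    }
  ]
}
```

Because the resolver delivers this alongside the addresses, the client has its per-method parameters before it ever contacts a server.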

[grpc-io] Re: Protobuff for R

2023-12-12 Thread 'Mark D. Roth' via grpc.io
This sounds like a protobuf question, not a question about gRPC, so I don't think this is the right mailing list. On Wednesday, December 6, 2023 at 10:38:30 AM UTC-8 shailesh gavathe wrote: > Is there a good example on how to generate protofbuff for R using > RProtobuf package? > > I have

[grpc-io] gRPC-Core Release 1.60.0

2023-12-08 Thread 'Mark D. Roth' via grpc.io
This is the 1.60.0 (gjallarhorn) release announcement for gRPC-Core and the wrapped languages C++, Objective-C, Python, PHP and Ruby. Latest release notes are here. This release contains refinements, improvements, and bug fixes, with highlights

[grpc-io] gRFC A61: IPv4 and IPv6 Dualstack Backend Support

2023-10-02 Thread 'Mark D. Roth' via grpc.io
I have published a gRFC for dualstack backend support: https://github.com/grpc/proposal/pull/356 Comments welcome! -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] gRPC as dynamic library interface (no network stack or IPC)

2023-08-16 Thread 'Mark D. Roth' via grpc.io
The closest thing we currently have to this is the in-process transport, which uses the normal gRPC API but speaks to another process on the same machine instead of using the networking stack. This still uses IPC, but it avoids all of the overhead of TCP, HTTP/2, and gRPC framing on the wire.

Re: [grpc-io] C++: load balancing IP's

2023-06-30 Thread 'Mark D. Roth' via grpc.io
C++ does not yet support a public API for resolvers or LB policies. This is something we would very much like to do, but the current internal APIs are still dependent upon some ugliness in our legacy polling code that we don't want to expose. Once our migration to EventEngine

[grpc-io] gRFC A65: mTLS Credentials in xDS Bootstrap File

2023-05-24 Thread 'Mark D. Roth' via grpc.io
I've written a gRFC for adding support to configure mTLS to talk to xDS servers: https://github.com/grpc/proposal/pull/372 Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRFC A63: xDS StringMatcher in Header Matching

2023-05-03 Thread 'Mark D. Roth' via grpc.io
I've created a gRFC for supporting xDS StringMatcher in header matching: https://github.com/grpc/proposal/pull/359 Comments welcome! -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRPC-Core Release 1.53.0

2023-03-27 Thread 'Mark D. Roth' via grpc.io
This is the 1.53.0 (glockenspiel) release announcement for gRPC-Core and the wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. Latest release notes are here. This release

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-21 Thread 'Mark D. Roth' via grpc.io
I don't think there are any remaining open questions here, but since you mentioned Envoy's least-request policy, I wanted to provide just a little more information. First, note that there is a community-contributed design for Envoy's least-request LB policy in gRFC A48

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-14 Thread 'Mark D. Roth' via grpc.io
On Tue, Feb 14, 2023 at 12:45 AM Tommy Ulfsparre wrote: > > So that goes back to what I was saying earlier: I think this would > result in incorrect weights, because each client will see only a fraction > of the in-flight requests. This would no longer be weighting the backends > based on CPU

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-13 Thread 'Mark D. Roth' via grpc.io
On Mon, Feb 13, 2023 at 2:11 PM Tommy Ulfsparre wrote: > > If you want the in-flight requests to be reported by the server, then I > don't see why we'd need another metric here > > I see the confusion here. I don't want in-flight requests to be reported > by the server. The client will keep

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-13 Thread 'Mark D. Roth' via grpc.io
On Mon, Feb 13, 2023 at 1:19 PM Tommy Ulfsparre wrote: > > > I don't think we'd want the client to do its own tracking of in-flight > requests to each endpoint, because the endpoint may also be receiving > requests from many other endpoints at the same time, and the client would > not see those,

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-13 Thread 'Mark D. Roth' via grpc.io
I don't think we'd want the client to do its own tracking of in-flight requests to each endpoint, because the endpoint may also be receiving requests from many other endpoints at the same time, and the client would not see those, so it could result in incorrect weights. I think it's both more

Re: [grpc-io] Re: A58: Weighted Round Robin LB Policy

2023-02-13 Thread 'Mark D. Roth' via grpc.io
This design does not actually use any info about in-flight requests or network latencies. It weights backends purely by the CPU utilization and request rate reported by the endpoint. It's certainly possible to write an LB policy that weights on in-flight requests or network latency, but that's

Re: [grpc-io] Migration from WCF to gRPC core

2022-12-16 Thread 'Mark D. Roth' via grpc.io
I think you can do something like this using a bidi stream. For example, consider the following API: // A message to be sent from the client to the server when the client changes the state of a component. message ChangeComponentState { string name = 1; // Component name. // ...other fields

Re: [grpc-io] Migration from WCF to gRPC core

2022-12-15 Thread 'Mark D. Roth' via grpc.io
gRPC does not provide a way to start an RPC from the server side. RPCs are always started from the client side. The general approach I would recommend in this kind of situation is to use a bidi streaming call that the client keeps open at all times, so that the server can use that to send a
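As a hedged sketch of the bidi-streaming approach described above (all names here are hypothetical, not from the original thread), the proto API might look like this:

```proto
// The client opens Subscribe() at startup and keeps the stream open.
// Because the stream is long-lived, the server can push a ServerCommand
// at any time, which stands in for a server-initiated "RPC".
service ControlChannel {
  rpc Subscribe(stream ClientEvent) returns (stream ServerCommand);
}

message ClientEvent {
  string description = 1;  // e.g. an ack or status report from the client
}

message ServerCommand {
  string action = 1;  // what the server wants the client to do
}
```

The client reads commands off the stream in a loop and writes back events or acknowledgments on the same stream.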

Re: [grpc-io] Re: Lib to link with visual studio...

2022-12-08 Thread 'Mark D. Roth' via grpc.io
nd this is because the linker crashed. > In no place is written the libs that it has to be used to link, the > incompatibility with the version, etc.. but with linux is the same. > Many Thanks. > > > El jue, 8 dic 2022 a las 19:45, 'Mark D. Roth' via grpc.io (< > grp...@googlegro

[grpc-io] Re: Lib to link with visual studio...

2022-12-08 Thread 'Mark D. Roth' via grpc.io
It's hard to know how to help you without more information. Can you point us to exactly which code you're trying to build, what build command you're using, and what error message you're getting? On Sunday, December 4, 2022 at 5:30:55 AM UTC-8 helirro...@gmail.com wrote: > Hello all, I'm

[grpc-io] Re: Compile grpc with C++11-only compiler support

2022-11-30 Thread 'Mark D. Roth' via grpc.io
gRPC 1.46 was the last version supporting only C++11. https://github.com/grpc/proposal/blob/master/L98-requiring-cpp14.md On Monday, November 28, 2022 at 12:16:14 PM UTC-8 Jeremy Pallotta wrote: > The latest versions of gRPC require compiler support for C++14. > > I need to use gRPC with CentOS

[grpc-io] gRFC A57: XdsClient Failure Mode Behavior

2022-10-26 Thread 'Mark D. Roth' via grpc.io
I've just shared the following gRFC to define XdsClient failure mode behavior: https://github.com/grpc/proposal/pull/335 Comments welcome. -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRFC A30 update: gRPC dropping support for xDS v2

2022-10-12 Thread 'Mark D. Roth' via grpc.io
We are changing gRFC A30 to allow us to drop xDS v2 support, as per the following PR: https://github.com/grpc/proposal/pull/333 Please let us know if you have any questions or concerns. Thanks! -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] Basic grpc question - Sync versus async c++ server

2022-09-21 Thread 'Mark D. Roth' via grpc.io
Yes, the main difference between the sync and async APIs is performance, both in terms of resources used (the sync API ties up threads while waiting for results, which can be avoided using the async API) and in terms of actual RPC performance (throughput, latency, etc). For some performance

Re: [grpc-io] [grpc/core][xds] What's the reason to time locality lb weight and endpoint lb weight in CreateChildPolicyAddressesLocked ?

2022-09-07 Thread 'Mark D. Roth' via grpc.io
In the case of the xDS ROUND_ROBIN policy, where we first use weighted_target to pick the locality and then use round_robin to pick the endpoint within that locality, this multiplication isn't needed (although it doesn't hurt either, since the resulting weights are still proportional to the

Re: [grpc-io] Re: grpc stops forward progress if DNS resolve has 0 addresses

2022-08-31 Thread 'Mark D. Roth' via grpc.io
Looking at our code more closely, it looks like there is a bug here. If the resolver returns an error for the addresses on the very first resolution attempt, it looks like we will get into a state where nothing will re-resolve. It looks like this bug has been here for a long time, so I'm

Re: [grpc-io] latest status of gRPC over QUIC

2022-08-17 Thread 'Mark D. Roth' via grpc.io
QUIC is supported in Java and in C-core via Cronet. It is not currently supported in Go. I don't believe that we have any benchmarks using QUIC. In general, QUIC hasn't been a priority for us. On Mon, Aug 15, 2022 at 11:40 AM Jiaxin Shan wrote: > Hi community, > > I am investigating whether

Re: [grpc-io] Re: gRPC C++ how to enforce different authentication for methods in same service

2022-08-17 Thread 'Mark D. Roth' via grpc.io
It sounds to me like what you really want here is authorization policy, not authentication control. I suggest that you look at the gRPC authz API, as described in gRFC A43. On Wed, Aug 10, 2022 at 3:22 AM Philipp T

Re: [grpc-io] failover when Deadline Exceeded

2022-08-17 Thread 'Mark D. Roth' via grpc.io
It's important to understand the difference between a connection and an RPC. There are generally many RPCs sent on a given connection, and DEADLINE_EXCEEDED is a failure status for an individual RPC, not for a connection. In the general case, just because one individual RPC failed does not mean

Re: [grpc-io] gRPC TCP connect timeout value

2022-06-22 Thread 'Mark D. Roth' via grpc.io
There should be options to reduce the initial connection timeout, but the details depend on what language you're using. If you're using a C-core-based language, you can use the GRPC_ARG_MIN_RECONNECT_BACKOFF_MS
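For the C-core-based case mentioned above, a minimal C++ sketch of setting the reconnect-backoff channel args might look like this (the target, credentials, and millisecond values are illustrative assumptions, not recommendations):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch: tighten connection-attempt timing on a C-core-based channel.
std::shared_ptr<grpc::Channel> MakeChannel(const std::string& target) {
  grpc::ChannelArguments args;
  // Minimum and maximum time between connection attempts, in milliseconds.
  args.SetInt(GRPC_ARG_MIN_RECONNECT_BACKOFF_MS, 1000);
  args.SetInt(GRPC_ARG_MAX_RECONNECT_BACKOFF_MS, 5000);
  return grpc::CreateCustomChannel(
      target, grpc::InsecureChannelCredentials(), args);
}
```

Other wrapped languages expose the same underlying channel args through their own channel-option APIs.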

Re: [grpc-io] [Question] Using grpc compression weirdly leads to low throughput

2022-06-22 Thread 'Mark D. Roth' via grpc.io
It's hard to say what's going on here without knowing more about your environment. A few things to consider: - Compression is negotiated between client and server based on what algorithms are supported by each one, as described in

Re: [grpc-io] C++ helloworld greeter_client data races

2022-06-22 Thread 'Mark D. Roth' via grpc.io
This looks like a bug. Please file an issue on https://github.com/grpc/grpc. Thanks! On Fri, Jun 17, 2022 at 7:29 PM Zhiying Liang wrote: > Hello, > > At first, we didn't build grpc-c++ with TSAN. When we ran greeter_server > and greeter_client in the cpp/helloworld example, TSAN complained

Re: [grpc-io] gRPC Serialization Trait and grpc::Slice.

2022-06-22 Thread 'Mark D. Roth' via grpc.io
Unfortunately, we don't currently have any support for this kind of zero-copy read on the receiving side. In the long run, we have some ideas about how we'd like to support this kind of thing, but we haven't had time to turn our attention to this yet, so we have no concrete plan or ETA yet. But I

Re: [grpc-io] C++: Custom method handlers

2022-06-22 Thread 'Mark D. Roth' via grpc.io
We don't actively support exceptions, since our style guide prohibits them (https://google.github.io/styleguide/cppguide.html#Exceptions), so we're unlikely to put any work into providing hooks for better exception handling. However, if you wanted to propose a gRFC

[grpc-io] gRFC A53: Option for Ignoring xDS Resource Deletion

2022-05-19 Thread 'Mark D. Roth' via grpc.io
I've published a gRFC for adding an option to the xDS bootstrap file to tell the client to ignore resource deletions sent by the server: https://github.com/grpc/proposal/pull/302 Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] xds: Initial Request on the ADS Stream contains only one resource

2022-03-17 Thread 'Mark D. Roth' via grpc.io
Thanks for confirming! I've just merged the PR, so this change should be in 1.46. On Thu, Mar 17, 2022 at 3:21 PM František Bořánek wrote: > > Works like a charm. All current channels are included in a first message > during reconnection. > > Thanks very much. > > Here is from debug log: > >

Re: [grpc-io] xds: Initial Request on the ADS Stream contains only one resource

2022-03-17 Thread 'Mark D. Roth' via grpc.io
Thanks for reporting this! I think I see the problem, and it should be fixed by https://github.com/grpc/grpc/pull/29144. I don't see a trivial way to write a test for it, since the current behavior is not broken, just slightly sub-optimal. But feel free to try out the patch and let me know if

Re: [grpc-io] xds: Initial Request on the ADS Stream contains only one resource

2022-03-17 Thread 'Mark D. Roth' via grpc.io
That's very interesting. Can you run with the following environment variables and send me the log? GRPC_VERBOSITY=DEBUG GRPC_TRACE=xds_client,xds_resolver On Wed, Mar 16, 2022 at 10:53 PM František Bořánek wrote: > C++, the latest 1.44.0 release. > > 16. 3. 2022 v 21:32, Mark D. Roth : > > 

Re: [grpc-io] xds: Initial Request on the ADS Stream contains only one resource

2022-03-16 Thread 'Mark D. Roth' via grpc.io
It is valid behavior, but I agree that it's a little sub-optimal for the client to send two messages when it reestablishes the stream. What language are you using gRPC in? On Wed, Mar 16, 2022 at 1:19 PM František Bořánek wrote: > It makes sense. > > However, in this case, the initial

Re: [grpc-io] xds: Initial Request on the ADS Stream contains only one resource

2022-03-16 Thread 'Mark D. Roth' via grpc.io
The gRPC client does not know a priori what set of resource names are available on the xDS server, and even if it did, it would not request all of them proactively, because it may not actually need all of them. Instead, each time a gRPC channel is created with an "xds:" target URI, that tells gRPC

[grpc-io] Re: Stream is closed for idle connection

2022-01-19 Thread 'Mark D. Roth' via grpc.io
There are a number of reasons why the stream might be closed. The server might close the individual stream, or the underlying connection as a whole might be dropped, which will implicitly close all streams active at the moment when the connection is dropped. When stream->Read() returns false,
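The pattern described above can be sketched for a synchronous C++ client as follows; `Response` and the surrounding setup are hypothetical placeholders for your generated types:

```cpp
// Read until the stream ends; Read() returning false covers both a clean
// end-of-stream and an underlying connection drop.
Response msg;  // placeholder for your generated message type
while (stream->Read(&msg)) {
  // ... process msg ...
}
// Finish() distinguishes the two cases: OK means the server closed the
// stream cleanly; a non-OK status means an error or dropped connection.
grpc::Status status = stream->Finish();
if (!status.ok()) {
  // Handle the error (e.g. reconnect or surface the status to the caller).
}
```

This is why the advice is always to check `Finish()` after `Read()` returns false rather than assuming the stream ended normally.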

Re: [grpc-io] grpc executor threads

2022-01-07 Thread 'Mark D. Roth' via grpc.io
No, that's not a public API, and you should not call it directly. (It may be public in the class, but the class is not part of the gRPC public API.) On Fri, Jan 7, 2022 at 12:47 PM Jeff Steger wrote: > Thanks Mark, I will turn on trace and see if I see anything odd. I was > reading about a

Re: [grpc-io] grpc executor threads

2022-01-07 Thread 'Mark D. Roth' via grpc.io
Oh, sorry, I thought you were asking about the sync server threads. The default-executor threads sound like threads that are spawned internally inside of C-core for things like synchronous DNS resolution; those should be completely unrelated to the sync server threads. I'm not sure what would

Re: [grpc-io] grpc executor threads

2022-01-06 Thread 'Mark D. Roth' via grpc.io
The C++ sync server has one thread pool for both polling and request handlers. When a request comes in, an existing polling thread basically becomes a request handler thread, and when the request handler completes, that thread is available to become a polling thread again. The MIN_POLLERS and

Re: [grpc-io] grpc executor threads

2022-01-04 Thread 'Mark D. Roth' via grpc.io
I answered this in the other thread you posted on. On Sun, Jan 2, 2022 at 9:39 AM Jeff Steger wrote: > grpc-java has a method in its ServerBuilder class to set the Executor. Is > there similar functionality for grpc-c++ ? I am running a C++ grpc server > and the number of executor threads it

Re: [grpc-io] Re: C++ synchonous grpc server question

2022-01-04 Thread 'Mark D. Roth' via grpc.io
has a method in its ServerBuilder class to set the Executor. Is > there similar functionality for grpc-c++ ? I am running a C++ grpc server > and the number of executor threads it spawns is high and seems to never > decrease, even when connections stop. > > On Wed, May 19, 2021 at 1:1

[grpc-io] gRPC-Core Release 1.42.0

2021-12-01 Thread 'Mark D. Roth' via grpc.io
This is the 1.42.0 (granola) release announcement for gRPC-Core and the wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. Latest release notes are here. This release contains refinements, improvements, and bug fixes, with highlights

[grpc-io] gRFC A47: xDS Federation

2021-10-20 Thread 'Mark D. Roth' via grpc.io
I've put together a gRFC for xDS Federation support: https://github.com/grpc/proposal/pull/268 Feedback welcome! -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] completion queue distribution, memory footrpint

2021-10-06 Thread 'Mark D. Roth' via grpc.io
On Sun, Oct 3, 2021 at 4:33 PM TataG wrote: > Hi gRPC group members. > > I was looking how the RPC's are distributed on server side. > In grpc_bench c++ multithread example > https://github.com/LesnyRumcajs/grpc_bench/blob/master/cpp_grpc_mt_bench/main.cpp, > they have multiple threads handling

Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-13 Thread 'Mark D. Roth' via grpc.io
(Adding AJ, who's driving the EventEngine effort.) AJ, it looks like Sureshbabu wants to be an early tester of the new EventEngine code on Windows. Please coordinate with him when we get to a point where the new code is actually ready for testing (specifically the client-side endpoint code). On

Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-08 Thread 'Mark D. Roth' via grpc.io
It sounds like this is a Windows-specific problem, which unfortunately means that we probably can't help you much in the short term, since we don't have any spare cycles to focus on Windows-specific performance. As I mentioned earlier, the Windows-specific TCP code in gRPC will be replaced by the

Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-07 Thread 'Mark D. Roth' via grpc.io
Thanks, that's helpful. From the trace, it looks like you're running on Windows. Most of our performance efforts have been focused on Linux, not Windows, so it may be that this is just an inefficiency in gRPC's Windows TCP code. Can you run the client on a Linux machine to see if it makes a

[grpc-io] gRFC A46: xDS NACK Semantics Improvement

2021-09-03 Thread 'Mark D. Roth' via grpc.io
I've published a gRFC for improving xDS NACK semantics: https://github.com/grpc/proposal/pull/260 Feedback welcome. -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-03 Thread 'Mark D. Roth' via grpc.io
I don't see anything obviously wrong with your code. Since this test is sending RPCs serially instead of in parallel, it's possible that there are too many network round-trips happening here, each one of which would increase latency because the next operation is blocking on the previous one. Can

Re: [grpc-io] Re: grpc c++ performance - help required

2021-09-01 Thread 'Mark D. Roth' via grpc.io
I'm so sorry for not responding sooner! For some reason, gmail tagged your messages as spam, so I didn't see them. :( On Fri, Aug 27, 2021 at 10:55 PM Sureshbabu Seshadri < sureshbabu8...@gmail.com> wrote: > Dear GRPC team, > Can any one help on this? > > On Friday, August 13, 2021 at 12:53:21

Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread 'Mark D. Roth' via grpc.io
I am not familiar with java-control-plane, so I can't answer that. You might try asking in their developer community. On Mon, Aug 16, 2021 at 10:55 AM Lukáš Drbal wrote: > Hello Mark, > > as first thanks a lot for Your replay, it is more clear for me now. > > Do you have any idea how to

Re: [grpc-io] Node group based XDS routing

2021-08-16 Thread 'Mark D. Roth' via grpc.io
The xDS protocol does not require the node information to be sent by the client for every request on the stream; the client needs to send it only on the first request on the stream. Quoting this section of the xDS spec

[grpc-io] Re: grpc c++ performance - help required

2021-08-11 Thread 'Mark D. Roth' via grpc.io
You can check to see whether the problem is a channel startup problem or a latency problem by calling channel->WaitForConnected(gpr_inf_future(GPR_CLOCK_MONOTONIC)) before you start sending RPCs on the channel. That call won't return until the channel has completed the DNS lookup and
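The suggestion above can be sketched in C++ like so; the target and credentials are hypothetical placeholders:

```cpp
// Sketch: block until the channel is fully connected (name resolution,
// TCP connect, and any TLS handshake) before timing RPCs, so that channel
// startup cost is excluded from latency measurements.
auto channel = grpc::CreateChannel(
    "myserver.example.com:443", grpc::InsecureChannelCredentials());
channel->WaitForConnected(gpr_inf_future(GPR_CLOCK_MONOTONIC));
// Any RPC latency measured from this point reflects only per-RPC cost.
```

A finite deadline (via `gpr_time_add`) can be used instead of `gpr_inf_future` if you don't want the call to block indefinitely on a misconfigured target.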

[grpc-io] Re: c++ CallbackApi: thread safety

2021-08-11 Thread 'Mark D. Roth' via grpc.io
There can only be one outstanding read or write at a given time on a given stream, but reads and writes can happen in parallel. So you're guaranteed that no more than one thread will be in OnReadDone() at once, and no more than one thread will be in OnWriteDone() at once, but you could have

[grpc-io] Re: large file transfer with gRPC

2021-08-11 Thread 'Mark D. Roth' via grpc.io
I don't think we have any benchmarks for this kind of use-case. I think the performance would probably depend on the application protocol, like how you split up the file contents into messages sent on the gRPC stream. On Friday, August 6, 2021 at 5:43:09 PM UTC-7 manor parmar wrote: > I have

Re: [grpc-io] Re: Are any of the gRPC devs supporting C++?

2021-07-20 Thread 'Mark D. Roth' via grpc.io
hs to get working not 5 minutes before your > response :P > > On Tue, Jul 20, 2021 at 11:32 AM 'Mark D. Roth' via grpc.io < > grpc-io@googlegroups.com> wrote: > >> Sorry for the slow response on this. Unfortunately, we don't have a good >> canned example of b

[grpc-io] Re: Are any of the gRPC devs supporting C++?

2021-07-20 Thread 'Mark D. Roth' via grpc.io
Sorry for the slow response on this. Unfortunately, we don't have a good canned example of bidi streaming in C++. The background here is that we've been working on the new callback-based API, which we recently completed, and we didn't really want to publish a detailed example using the old

Re: [grpc-io] Re: gRFC L81: Custom Audience in JWT Access Credentials and Google Default Credentials

2021-06-30 Thread 'Mark D. Roth' via grpc.io
The update that Yihua mentioned is in https://github.com/grpc/proposal/pull/248. On Wed, Jun 30, 2021 at 10:37 AM 'yih...@google.com' via grpc.io < grpc-io@googlegroups.com> wrote: > The proposal is updated to support the inclusion of user-provided scope, > instead of audience in a JWT token.

[grpc-io] Re: C++ synchonous grpc server question

2021-05-19 Thread 'Mark D. Roth' via grpc.io
The gRPC server synchronous API in C++ has a thread pool to manage polling and request handling. The thread pool grows and shrinks as needed but always keeps some capacity around for new incoming requests that may show up at any time. The threads should go away when you shut down the server.

[grpc-io] gRFC A42: xDS Ring Hash LB Policy

2021-05-14 Thread 'Mark D. Roth' via grpc.io
I've created a gRFC for supporting the xDS Ring Hash Load Balancing Policy: https://github.com/grpc/proposal/pull/239 Comments welcome! -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] What is the purpose of two CQ pointers in C++ async interface?

2021-04-21 Thread 'Mark D. Roth' via grpc.io
On Wed, Apr 21, 2021 at 12:34 PM Mark Sandan wrote: > > > So we're currently working on changing to a polling model where the > event engine provides all polling threads and the application API is a > simpler callback-based model. > Is this the PR for the callback-based model you are referring

Re: [grpc-io] What is the purpose of two CQ pointers in C++ async interface?

2021-04-21 Thread 'Mark D. Roth' via grpc.io
A lot of the CQ-based API was designed around the idea that the application could tune performance by deciding which activity was going to occur on which CQ and then deciding which thread(s) were going to poll each CQ. So in the case you're asking about, the API allows using one CQ to get the new

Re: [grpc-io] ssl connection via proxy

2021-02-26 Thread 'Mark D. Roth' via grpc.io
Are you connecting via an HTTP CONNECT proxy? If so, you should be able to do this simply by setting the $grpc_proxy environment variable to point at your proxy. Zhen (CC'ed) can check your SSL creds code to make sure it looks right. On Thu, Feb 25, 2021 at 8:04 AM Yuriy Hashev wrote: > I
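The environment-variable approach mentioned above looks like this; the proxy address is a placeholder, and the client binary name is hypothetical:

```shell
# Tell C-core-based gRPC clients to tunnel through an HTTP CONNECT proxy.
export grpc_proxy=http://proxy.example.com:3128
# Then launch your gRPC client in this environment, e.g.:
#   ./my_grpc_client
```

The channel will issue an HTTP CONNECT request to the proxy and then speak gRPC over the resulting tunnel, so the SSL handshake still happens end-to-end with the real server.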

[grpc-io] Re: Test reports for gRPC

2021-02-26 Thread 'Mark D. Roth' via grpc.io
What sort of information are you looking for? We have functionality tests that run as part of our repo. We also have some performance info at https://grpc.io/docs/guides/benchmarking/. On Tuesday, February 23, 2021 at 7:57:34 AM UTC-8 ancol...@hotmail.com wrote: > Hello we are implementing

[grpc-io] Re: gRFC L77: Core and C++ Third Party Identity Support for Call Credentials

2021-02-08 Thread 'Mark D. Roth' via grpc.io
FYI, the PR for this gRFC has moved to: https://github.com/grpc/proposal/pull/221 On Wednesday, January 27, 2021 at 2:01:36 PM UTC-8 Chuan Ren wrote: > Hi all, > > I've created a gRFC for gRPC support of the Third Party Identity Support > for Call Credentials. The proposal is here: > >

[grpc-io] gRFC A39: xDS HTTP Filters

2021-02-03 Thread 'Mark D. Roth' via grpc.io
I've created a gRFC for supporting xDS HTTP filters: https://github.com/grpc/proposal/pull/219 Feedback welcome, either in reply to this thread or on the PR. Thanks! -- Mark D. Roth Software Engineer Google, Inc.

[grpc-io] gRFC A37: xDS Aggregate and Logical DNS Clusters

2021-01-26 Thread 'Mark D. Roth' via grpc.io
I've created a gRFC for gRPC support of the xDS Logical DNS Cluster type and the Aggregate Cluster extension. The proposal is here: https://github.com/grpc/proposal/pull/216 Comments welcome. -- Mark D. Roth Software Engineer Google, Inc.

Re: [grpc-io] Re: Is there have "Resolver" and "Balancer" interface in Python ?

2020-11-11 Thread 'Mark D. Roth' via grpc.io
There are two parts to this question: the resolver API and the LB policy API. I'll answer both separately. For the LB policy API: - We currently have no plan to ever expose the ability to implement new LB policies in wrapped languages. The reason for this is that the LB policy is on

Re: [grpc-io] [xds_client] pass channel arguments to xds server channel creation

2020-11-09 Thread 'Mark D. Roth' via grpc.io
If you're living with local hacks anyway, you may be able to use this internal function to set channel args: https://github.com/grpc/grpc/blob/4ac9c6f755463a2321f84b0cb2d631e1828faedb/src/core/ext/xds/xds_client.h#L325 To be clear, this is *not* a public API, and we do not promise not to break

Re: [grpc-io] [xds_client] pass channel arguments to xds server channel creation

2020-11-09 Thread 'Mark D. Roth' via grpc.io
On Mon, Nov 9, 2020 at 12:11 PM Yi-Shu Tai wrote: > Hey Mark, > > Thanks for the reply. > > Instead we want to disable the proxy. We somehow have HTTP_PROXY set in > some services. For the second question, I forgot to mention that we also > want to use custom certs for our infrastructure. > Can

Re: [grpc-io] [xds_client] pass channel arguments to xds server channel creation

2020-11-09 Thread 'Mark D. Roth' via grpc.io
We don't have a way to pass arbitrary channel args to the xDS channel today. Passing the grpc.enable_http_proxy arg probably isn't necessary, since you can instead use the $grpc_proxy environment variable to enable use of the proxy. Passing grpc.ssl_target_name_override wouldn't actually help

[grpc-io] Re: Updating Subchannels in client Application layer?

2020-10-28 Thread 'Mark D. Roth' via grpc.io
If I understand the scenario here correctly, each client has many different server names, and those server names change over time. What determines this list of server names for a given client to use, and how does the client get the list of server names to use? Can you just publish all of the

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
It's definitely something that we want to finish. I personally spent almost a year working on the C-core implementation, and it's mostly complete, but not quite enough to actually use yet -- there's still a bit of missing functionality to implement, and there are some design issues related to

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
As per discussion earlier in this thread, we haven't yet finished implementing the retry functionality, so it's not yet enabled by default. I believe that in Java, you may be able to use it, albeit with some caveats. Penn (CC'ed) can tell you what the current status is in Java. On Wed, Sep 30,

Re: [grpc-io] Re: gRPC A6: Retries

2020-09-30 Thread 'Mark D. Roth' via grpc.io
gRPC client channels will automatically reconnect to the server when the TCP connection fails. That has nothing to do with the retry feature, and it's not something you need to configure -- it will happen automatically. Now, if an individual request is already in-flight when the TCP connection
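For context on what the retry feature (gRFC A6) configures, as opposed to the automatic reconnection described above: retries are driven by a service config with a per-method retry policy. A sketch of such a config follows; the service name is a placeholder, and this only takes effect in implementations where retries are enabled.

```python
import json

# Sketch of a gRFC A6 retry policy expressed as a service config.
# "my.package.MyService" is a placeholder service name.
service_config = {
    "methodConfig": [{
        "name": [{"service": "my.package.MyService"}],
        "retryPolicy": {
            "maxAttempts": 4,
            "initialBackoff": "0.1s",
            "maxBackoff": "1s",
            "backoffMultiplier": 2,
            "retryableStatusCodes": ["UNAVAILABLE"],
        },
    }]
}

# The config is normally delivered by the name resolver, or passed to the
# channel as a JSON string in implementations that support setting it
# directly via a channel argument.
service_config_json = json.dumps(service_config)
```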

[grpc-io] gRPC-Core Release 1.32.0

2020-09-11 Thread 'Mark D. Roth' via grpc.io
This is the 1.32.0 (giggle) release announcement for gRPC-Core and the wrapped languages C++, C#, Objective-C, Python, PHP, and Ruby. The latest release notes are here. This release contains refinements, improvements, and bug fixes, with highlights


[grpc-io] Re: grpc++ any api to set socket bind options

2020-08-20 Thread 'Mark D. Roth' via grpc.io
Although you can use grpc_socket_mutator to do this, please note that it's not actually a public API, so we don't guarantee that the API won't change without notice between versions of gRPC. The internal API is at https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/socket_mutator.h .

Re: [grpc-io] gRPC programming guideline: When to convert status codes?

2020-08-19 Thread 'Mark D. Roth' via grpc.io
In general, we do not have a hard-and-fast rule about how implementation code handles this. It's up to the developers and code reviewers to make sure the overall behavior is correct. I don't see a problem with the presence of one function deep inside of the implementation returning a status that

Re: [grpc-io] gRPC programming guideline: When to convert status codes?

2020-08-19 Thread 'Mark D. Roth' via grpc.io
I don't actually see a problem here. The fact that some of these internal functions may use grpc_status_code to return their intent to the caller doesn't mean that that same code should be returned from gRPC to the application. As a hypothetical example (I haven't looked at the function and

Re: [grpc-io] A27: eds lb_endpoints load_balancing_weight support

2020-08-03 Thread 'Mark D. Roth' via grpc.io
I don't think there's any deadline for this. This is not a feature that grpc-team is planning to work on, so it's up to you and other OSS developers to contribute it if it's something you want. Unless someone beats you to it, I'd say go for it! On Mon, Aug 3, 2020 at 1:34 PM Yi-Shu Tai wrote:

Re: [grpc-io] A27: eds lb_endpoints load_balancing_weight support

2020-08-03 Thread 'Mark D. Roth' via grpc.io
We've talked about doing something like this. You will need a way to store the endpoint weight in the ServerAddress, but we don't want to add a top-level field for this. Instead, I've thrown together https://github.com/grpc/grpc/pull/23716 to give you a mechanism to do this. Also, note that

Re: [grpc-io] Re: gRPC A6: Retries

2020-07-24 Thread 'Mark D. Roth' via grpc.io
Unfortunately, nothing has changed here. At this point, the soonest we could get back to this would probably be sometime in Q2 next year. On Fri, Jul 24, 2020 at 7:54 AM wrote: > Mark, > > Are there any updates to this or does the latest post still stand? > > Thanks, > Nathan > > On Thursday,

Re: [grpc-io] Re: Pure C client

2020-07-22 Thread 'Mark D. Roth' via grpc.io
It's also worth noting that the C-core API is not really a public API, since it's aimed at language integrators, not at applications. In particular, we do not guarantee any backward compatibility in the C-core API; it may change in breaking ways from release to release. Currently, we do publish

[grpc-io] gRFC A30: xDS v3 Support

2020-06-25 Thread 'Mark D. Roth' via grpc.io
I've created a gRFC for xDS v3 support: https://github.com/grpc/proposal/pull/189 Comments welcome! -- Mark D. Roth Software Engineer Google, Inc. -- You received this message because you are subscribed to the Google Groups "grpc.io" group. To unsubscribe from this group and stop receiving

Re: [grpc-io] Re: CreateInsecureChannelFromFd() - ownership of the given file descriptor.

2020-05-14 Thread 'Mark D. Roth' via grpc.io
What I said earlier is correct: once you pass an fd to CreateInsecureChannelFromFd(), you should not close it, because the channel has already taken ownership of it. But it sounds like the question you're actually asking here is about why the channel doesn't automatically reconnect. When you

[grpc-io] Re: How to delete Speech::Stub and grpc:Channel

2020-05-06 Thread 'Mark D. Roth' via grpc.io
The stub holds a ref to the channel, but every pending call also takes its own ref to the channel. So the channel won't be destroyed until you destroy m_pSpeechStub, reset or destroy channel, and complete any calls that you started using the stub. The channel takes a ref to the channel creds

[grpc-io] Re: Capture and return SSL errors to client

2020-05-06 Thread 'Mark D. Roth' via grpc.io
UNAVAILABLE is the right status code in this situation, but the error message returned along with that status code should provide more useful information. Fixing this will require some deep plumbing changes, so I don't know if it will happen anytime soon, but I have filed the following bug to

[grpc-io] Re: proposal - A6-client-retries.md - timeline

2020-05-06 Thread 'Mark D. Roth' via grpc.io
Unfortunately, the retry work was never completed and has been on the back burner for the last couple of years. A large part of the functionality was implemented, but there's still a lot that needs to be done, and there are some integration issues with things like stats. I do want to get back

[grpc-io] Re: CreateInsecureChannelFromFd() - ownership of the given file descriptor.

2020-05-06 Thread 'Mark D. Roth' via grpc.io
gRPC takes ownership of the fd when you pass it to CreateInsecureChannelFromFd(), so you don't need to shut it down or close it. On Tuesday, May 5, 2020 at 4:33:23 AM UTC-7 krzyszt...@gmail.com wrote: > Hi, > I'm creating a grpc channel for the grpc client using > function

[grpc-io] Re: DNS-based failover of long-lived grpc connections

2020-05-06 Thread 'Mark D. Roth' via grpc.io
gRPC does not use DNS TTLs. However, it does re-resolve when the connection to the server is closed, so you can use server-side connection management to have the server periodically close the connection to force the
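The server-side connection management mentioned here can be configured with standard gRPC channel arguments; a sketch (the values are illustrative, and in Python these would be passed as `grpc.server(..., options=server_options)`):

```python
# Standard gRPC channel arguments that make the server close connections
# periodically, which in turn triggers client-side DNS re-resolution.
# The specific durations below are illustrative, not recommendations.
server_options = [
    ("grpc.max_connection_age_ms", 5 * 60 * 1000),    # close connections after ~5 minutes
    ("grpc.max_connection_age_grace_ms", 30 * 1000),  # allow in-flight RPCs to drain first
]
```

When a connection hits its max age, the server sends a GOAWAY; the client then re-resolves and reconnects, picking up any DNS changes.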
