Re: [grpc-io] Closing a stream with a long living context

2017-02-06 Thread Josh Humphries
On the client, the CloseSend method is "half-closing" the stream. So it
closes the request/upload half of the stream. The stream remains open until
the server closes the other half: the response/download part of the stream.
Cancelling the stream also closes it (as would the channel being
disconnected or the call timing out).



----

Josh Humphries

FullStory <https://www.fullstory.com/>  |  Atlanta, GA

Software Engineer

j...@fullstory.com

On Mon, Feb 6, 2017 at 10:29 AM, Michael Bond <kemperbond...@gmail.com>
wrote:

> Hey, trying to make sure I'm doing this correctly.
>
> Right now I'm having issues with closing streams started with a context
> that is passed around and exists for quite a while.
>
> In this example "ctx" is passed around to many goroutines. I want to keep
> "ctx" around, but passing it to "grpcStream" seems to keep the stream from
> actually closing. What I did below fixed the issue, but I wanted to know if
> it is necessary to pass a child context and cancel it for the stream to
> actually close. Is CloseSend() not sufficient if the context is still alive?
>
> log.Println("stream starting")
> streamCtx, cancel := context.WithCancel(ctx)
> defer cancel()
> stream, err := grpcStream(streamCtx, otherArgs)
> if err != nil {
>     errCh <- err
>     return
> }
> defer stream.CloseSend()
> defer log.Println("closing stream")
>
> Thanks!
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAO78j%2BJaF43Extu0gWFyq%2BXOZXPz6jT9Qp4bwFDqJGEsd%3Dx92w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Closing a stream with a long living context

2017-02-06 Thread Josh Humphries
Perhaps more helpful: in your code example, you would then consume the
responses by calling Recv() on the stream until it returns an error (io.EOF
on successful end of stream or some other error if the call fails). Even if
you are not expecting any response data from the server, you want to call
Recv() in order to learn the ultimate disposition of the call (did it
result in an error in the server or was it processed successfully?).

log.Println("stream starting")
streamCtx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := grpcStream(streamCtx, otherArgs)
if err != nil {
    errCh <- err
    return
}
defer stream.CloseSend()
defer log.Println("closing stream")
for {
    msg, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        errCh <- err
        return
    }
    _ = msg // process each response message here
}




Josh Humphries

FullStory <https://www.fullstory.com/>  |  Atlanta, GA

Software Engineer

j...@fullstory.com




Re: [grpc-io] Closing a stream with a long living context

2017-02-06 Thread Josh Humphries
On Mon, Feb 6, 2017 at 1:19 PM, Michael Bond <kemperbond...@gmail.com>
wrote:

> Thanks for the quick reply. Should have specified that the code in the
> original post is a snippet; there's receiving logic underneath it.
>
> Some more details surrounding this: in this case I have a callback
> function on the server (written in Python) that needs to be executed to
> free resources, and closing the sending portion does not seem to trigger
> that. Also, the client in this case dictates all connections. The server
> simply pours a stream of data to the client until the client no longer
> needs that particular data. So, to be more specific with my question: how
> would I fully close the stream from the client's side?
>

Not sure I follow 100%. But the Python code should have similar logic where
it is receiving the request messages. When the client half-closes the
stream, the server would get EOF trying to receive (or, if the Python APIs
were async/push like the Java APIs are, you'd get an "end of stream"
notification). Is that where you are doing the cleanup?
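To make the half-close behavior concrete, here is a runnable Go sketch with stand-in types (recvStream, fakeStream, and handle are illustrative names, not gRPC APIs): the server-side cleanup belongs where Recv reports io.EOF, which is exactly what the client's CloseSend produces.

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// recvStream is a minimal stand-in for the receiving side of a gRPC stream.
type recvStream interface {
	Recv() (string, error)
}

// fakeStream simulates a client that sends a few messages, then half-closes.
type fakeStream struct{ msgs []string }

func (f *fakeStream) Recv() (string, error) {
	if len(f.msgs) == 0 {
		return "", io.EOF // what the server sees once the client calls CloseSend
	}
	m := f.msgs[0]
	f.msgs = f.msgs[1:]
	return m, nil
}

// handle drains the request stream; cleanup runs exactly once, whether the
// client half-closed (io.EOF) or the stream failed with some other error.
func handle(stream recvStream, cleanup func()) error {
	defer cleanup()
	for {
		msg, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			return nil // client half-closed: normal end of the request stream
		}
		if err != nil {
			return err
		}
		fmt.Println("got:", msg)
	}
}

func main() {
	s := &fakeStream{msgs: []string{"hello", "world"}}
	err := handle(s, func() { fmt.Println("cleanup: freeing resources") })
	fmt.Println("handler returned:", err)
}
```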




Re: [grpc-io] Closing a stream with a long living context

2017-02-06 Thread Josh Humphries
On Mon, Feb 6, 2017 at 1:49 PM, Michael Bond <kemperbond...@gmail.com>
wrote:

> This is a unary-to-stream, not stream-to-stream, RPC, if that changes
> anything.
>

Do you mean server-streaming (i.e. unary request message, streaming
response)? If that is the case, calling stream.CloseSend() has no effect, as
the sending half of the stream is already closed. Take a look at the
generated code for a server-streaming method, and you'll see that the
generated stub calls SendMsg() and CloseSend() for you:
https://github.com/grpc/grpc-go/blob/883bfc7bc8feeb7d90501b977e1c23447b9ff136/test/grpc_testing/test.pb.go#L416


> So would calling CloseSend() send a message to the python server? Then
> would I just need to handle said message to make sure the callbacks are
> executed?
>

> Basic flow:
> Go client opens stream to python server with args
> Python server dumps back data to go client while the client reads it
> Go client no longer needs particular data and closes the stream
>

If the client no longer cares about the response stream and has already
finished sending request messages, then cancelling the context is the
appropriate action to take.


> Python server executes callback functions to clean up resources <- this is
> what currently wasn't happening with CloseSend() until I added a child
> context and cancelled it
>

I still don't quite understand how you've got this wired up. A code sample
of your Python server code might help. But, in any event, it sounds like
you genuinely need the client to cancel.

Another possibility could be to use bi-di streaming and then have the
server polling for request messages and do the clean up when you reach the
end of the request stream. But it sounds like that might only add needless
complexity to the server endpoint.



Re: [grpc-io] Re: gRPC A6: Retries

2017-02-13 Thread Josh Humphries
On Sun, Feb 12, 2017 at 9:24 PM, 'Eric Gribkoff' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Hi Michael,
>
> Thanks for the feedback. Responses to your questions (and Josh's follow-up
> question on retry backoff times) are inline below.
>
> On Sat, Feb 11, 2017 at 1:57 PM, 'Michael Rose' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> A few questions:
>>
>> 1) Under this design, is it possible to add load-balancing constraints
>> for retried/hedged requests? Especially during hedging, I'd like to be able
>> to try a different server since the original server might be garbage
>> collecting or have otherwise collected a queue of requests such that a
>> retry/hedge to this server will not be very useful. Or, perhaps the key I'm
>> looking up lives on a specific subset of storage servers and therefore
>> should be balanced to that specific subset. While that's the domain of a LB
>> policy, what information will hedging/retries provide to the LB policy?
>>
>>
> We are not supporting explicit load balancing constraints for retries. The
> retry attempt or hedged RPC will be re-resolved through the load-balancer,
> so it's up to the service owner to ensure that this has a low-likelihood of
> issuing the request to the same backend. This is part of a decision to keep
> the retry design as simple as possible while satisfying the majority of use
> cases. If your load-balancing policy has a high likelihood of sending
> requests to the same server each time, hedging (and to some extent retries)
> will be less useful regardless. There will be metadata attached to the call
> indicating that it's a retry, but it won't include information about which
> servers the previous requests went to.
>
>
>
>> 2) "Clients cannot override retry policy set by the service config." --
>> is this intended for inside Google? How about gRPC users outside of Google
>> who don't use the DNS mechanism to push configuration? It seems like
>> having a client override for retry/hedging policy is pragmatic.
>>
>>
> In general, we don't want to support client specification of retry
> policies. The necessary information about what methods are safe to retry or
> hedge, the potential for increased load, etc., are really decisions that
> should be left to the service owner. The retry policy will definitely be a
> part of the service config. While there are still some security-related
> discussions about the exact delivery mechanism for the service config and
> retry policies, I think your concern here should be part of the service
> config design discussion rather than something specific to retry support.
>
>
>> 3) Retry backoff time -- if I'm reading it right, it will always retry in
>> random(0, current_backoff) milliseconds. What's your feeling on this vs. a
>> retry w/ configurable jitter parameter (e.g. linear 1000ms increase w/ 10%
>> jitter). Is it OK if there's no minimum backoff?
>>
>>
> You are reading the backoff time correctly. There are a number of ways of
> doing this, (see https://www.awsarchitectureblog.com/2015/03/backoff.html)
> but choosing random(0, current_backoff) is done intentionally and
> should generally give the best results. We do not want a configurable
> "jitter" parameter. Empirically, the retries should have more varied
> backoff time, and we also do not want to let service owners specify very
> low values for jitter (e.g., 1% or even 0), as this would cluster all
> retries tightly together and further contribute to server overloading.
>

In that case, perhaps it should be random(0, 2*current_backoff) so that the
mean is the targeted backoff (with effectively 100% jitter). Otherwise,
documentation will need to be very clear that the actual expected value for
backoff is 1/2 of any configured values.


>
> Best,
>
> Eric Gribkoff
>
>
> Regards,
>> Michael
>>
>> On Friday, February 10, 2017 at 5:31:01 PM UTC-7, ncte...@google.com
>> wrote:
>>>
>>> I've created a gRFC describing the design and implementation plan for
>>> gRPC Retries.
>>>
>>> Take a look at the gRFC on Github.
>>>
>>

[grpc-io] grpc-go: client for service reflection

2017-02-16 Thread Josh Humphries
I noticed there is a server-side implementation
<https://github.com/grpc/grpc-go/blob/d0c32ee6a441117d49856d6120ca9552af413ee0/reflection/serverreflection.go>
in Go for service reflection
<https://github.com/grpc/grpc-go/blob/d0c32ee6a441117d49856d6120ca9552af413ee0/reflection/grpc_reflection_v1alpha/reflection.proto>.
But a client is notably absent. Using a generated stub is inconvenient for
numerous reasons, not least among them being that the generated API of raw
descriptor protos is rather unwieldy.

I've written such a client
<https://github.com/jhump/protoreflect/blob/4df185295ba66e94f4fd8e8f60f6a34be0cf875b/grpcreflect/clientreflection.go>
and was wondering if it would be considered a useful addition to the core
grpc-go repo. It provides a richer descriptor type, similar to what is done
in Java and C++ protobuf implementations, to make the returned schema
easier to use.


(I've also asked the protobuf mailing list if the descriptors would be a
welcome contribution to the Go protobuf library. Although I don't see my
message in Google Groups -- anyone know if that group is moderated?)

----
*Josh Humphries*
jh...@bluegosling.com

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAO78j%2BLPc5siL964t3%2Ba02qjmBEE0Se8yPqa6hvd%2B0Wqj69nhA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Re: http2 reverse proxy:how to reduce connections with backend server(s)?

2016-12-01 Thread Josh Humphries
On Thu, Dec 1, 2016 at 12:26 PM, killjason wrote:

> In my case:
> 1.The backend servers represent a same micro-service(may be contains 10
> machines) serving the same APIs.
> 2.The proxy is based on pure Netty(no gRPC included). but clients and
> backend servers are developed on gRPC.
> I am not sure if one Netty channel can represents multiple connections to
> multiple backends?
>

If you are using Netty as a layer-4 proxy, then it cannot. But if you use
it as a layer-7 proxy, using the HTTP/2 protocol handlers, you can. When a
client initiates a new stream, you pick a backend, find a channel to that
backend, and create a new stream on that backend channel. You will
effectively have a map of incoming channel & stream ID -> outgoing channel
& stream ID and use that to proxy frames. This works for gRPC, but would
probably be insufficient for general-purpose HTTP/2 where servers can also
initiate streams (since the proxy won't know which client a
server-initiated stream was intended for). Admittedly, there will probably be
some implementation complexity in properly managing HTTP/2 flow control
windows on both sides while avoiding excessive resource usage/buffering.
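The frame-routing table described above can be sketched as a pair of maps keyed by connection and stream ID (all names are illustrative; a real Netty proxy would key off its Channel objects rather than strings):

```go
package main

import "fmt"

// streamKey identifies one HTTP/2 stream on one connection.
type streamKey struct {
	connID   string // which client (or backend) connection
	streamID uint32 // HTTP/2 stream identifier on that connection
}

// streamMap is the proxy's routing table: each incoming (conn, stream)
// pair is bound to the outgoing pair chosen when the stream was opened.
type streamMap struct {
	out map[streamKey]streamKey // client stream -> backend stream
	in  map[streamKey]streamKey // backend stream -> client stream
}

func newStreamMap() *streamMap {
	return &streamMap{
		out: map[streamKey]streamKey{},
		in:  map[streamKey]streamKey{},
	}
}

// bind records the association when the proxy opens a backend stream
// for a newly initiated client stream.
func (m *streamMap) bind(client, backend streamKey) {
	m.out[client] = backend
	m.in[backend] = client
}

func main() {
	m := newStreamMap()
	client := streamKey{connID: "client-42", streamID: 5}
	backend := streamKey{connID: "backend-a", streamID: 13}
	m.bind(client, backend)

	// A DATA frame arriving on the client stream is forwarded here:
	fmt.Println("forward to:", m.out[client])
	// And a response frame from the backend is returned here:
	fmt.Println("return to:", m.in[backend])
}
```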


>
> On Friday, December 2, 2016 at 1:11:03 AM UTC+8, Carl Mastrangelo wrote:
>>
>> This depends on how homogeneous your backends are. For example, if your
>> proxy is going to the same logical set of backends each time (even if they
>> are distinct machines), then yes, this is possible with gRPC. In gRPC, a
>> channel represents a higher-level concept than a single client. It
>> represents multiple connections to multiple backends (a.k.a. servers).
>>
>> In your case, it seems like you should build a map of hostname (a.k.a.
>> "target") to Channel and pick the correct channel to send requests through
>> in your proxy. This works well if you are handling a small number of
>> hostnames. Each channel will have its own TCP connections, but there will
>> be few total channels.
>>
>> You can do more advanced things too with your host name.  If the backends
>> that you send traffic to route requests based on the host name, but each
>> backend can handle the requests of other host names, then you can reduce
>> the number of connections even further.  For example, if you know that
>> foo.mydomain.com and bar.mydomain.com both physically point to the same
>> set of servers, then they can both share the same channel.  In your
>> channel, you can override the "authority" field but still reuse the same
>> connection.
>>
>>
>> We can provide a better answer if you could share a little more detail
>> about what you want to do.
>>
>> On Thursday, December 1, 2016 at 8:59:45 AM UTC-8, killjason wrote:
>>>
>>> (moved from: https://github.com/grpc/grpc-java/issues/2470)
>>>
>>> Imagine there are 10k grpc-clients, they established 10k http2
>>> connections(TCP-connections) with the http2 reverse proxy; then http2
>>> reverse proxy create 10k http2 connections(TCP-connections) to the
>>> origin(backend) server.
>>> Is it possible to reduce the 10k connections between proxy and
>>> origin(backend) server?
>>> for example, can a connection pool be used in reverse proxy to reduce
>>> connections with backend server?
>>> This picture can explain better:
>>> [image: image]
>>> This picture is from the Nginx blog. Is it possible to do the same thing to
>>> reduce connections with backend servers using an http2 reverse proxy?
>>>



[grpc-io] golang: stubs backed by an interface instead of a concrete object

2016-12-16 Thread Josh Humphries
I've seen the idea proposed more than once that the generated stubs be
backed by an interface -- something along the lines of a channel
<https://github.com/grpc/grpc-java/blob/master/core/src/main/java/io/grpc/Channel.java>.
Most recently, it was during discussion of client interceptors
<https://github.com/grpc/grpc-go/issues/240>. It's also come up as a way of
doing in-process dispatch <https://github.com/grpc/grpc-go/issues/247>
without having to go through the network stack and (much more importantly)
serialization and deserialization.

There have been objections to the idea, and I just wanted to understand the
rationale behind them. I have a few ideas as to what the arguments for the
current approach might be. Is it one or more of these? Or are there other
arguments that I am overlooking, or nuance/detail I missed in the bullets
below?

   1. *Surface Area*. The main argument I can think of is that the API
   isn't yet sufficiently mature to lock down the interface now. So exporting
   only a single concrete type to be used by stubs makes the API surface area
   smaller, allowing more flexibility in changes later. To me, this implies
   that introduction of such an interface is an option in the future. (I don't
   particularly agree with this argument since the interface surface area
   could have been exposed *instead of* the existing grpc.Invoke and
   grpc.NewClientStream methods.)
   2. *Overhead*. It could be argued that the level of indirection
   introduced by the use of an interface could be too much overhead. I'd
   really like to see a benchmark that shows this if this is the case. It
   seems hard to imagine that a single interface vtable-dispatch would be
   measurable overhead considering what else happens in the course of a call.
   (Perhaps my imagination is broken...)
   3. *Complexity*. I suppose it might be argued that introducing another
   type, such as a channel interface, complicates the library and the existing
   flow. I happen to *strongly* disagree with such an argument. I think the
   interface could be added in a fairly painless way that would still support
   older generated code. This was described in this document
   
<https://docs.google.com/document/d/1weUMpVfXO2isThsbHU8_AWTjUetHdoFe6ziW0n5ukVg/edit#>.
   But were this part of the objection, I'd like to hear more.
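On the overhead point (bullet 2), the cost of a single interface dispatch is easy to measure with the standard testing package. This is an illustrative micro-benchmark of interface vs. direct method calls, not a gRPC benchmark; the types and wrappers are made up for the measurement.

```go
package main

import (
	"fmt"
	"testing"
)

// invoker stands in for a hypothetical channel-like interface.
type invoker interface{ Invoke(x int) int }

type direct struct{}

func (direct) Invoke(x int) int { return x + 1 }

// Wrappers are marked noinline so the compiler can't erase the
// call we are trying to measure.

//go:noinline
func call(d direct, x int) int { return d.Invoke(x) }

//go:noinline
func callIface(i invoker, x int) int { return i.Invoke(x) }

func main() {
	d := direct{}
	var i invoker = d

	bd := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			call(d, n)
		}
	})
	bi := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			callIface(i, n)
		}
	})

	// Both are typically on the order of a nanosecond, i.e. negligible
	// next to serialization and network work in an RPC.
	fmt.Println("direct:   ", bd)
	fmt.Println("interface:", bi)
}
```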


For context: I have some ideas I want to build for other kinds of stubs --
like providing special stubs that make batch streaming calls look like just
issuing a bunch of unary RPCs, or for making a single bidi-stream
conversation resemble a sequence of normal service calls (for some other
service) that happen to be pinned to the same stream.

All of these currently require *non-trivial code generation* -- either
specialized to the use, or I just provide my own interface-based dispatch
and build all of these things on top of that. But it feels like a
fundamental hole in the existing APIs that I cannot do this already.

The Java implementation has a layered architecture with Stubs on top,
Transports on the bottom, and Channel in-between. The Go implementation
exposes nothing along the lines of channel, instead combining it with the
transport into a single clientConn. This is incredibly limiting.

*Josh Humphries*
Software Engineer
*j...@fullstory.com <j...@fullstory.com>*



Re: [grpc-io] Re: Finding list of available grpc methods in an api

2017-04-13 Thread Josh Humphries
If you are using Go, I've written a library that provides a better client
API than just the streaming method on the generated service stub:
github.com/jhump/protoreflect/grpcreflect
<https://godoc.org/github.com/jhump/protoreflect/grpcreflect>

(It also speaks in terms of *desc.Descriptor types defined in that same repo
<https://godoc.org/github.com/jhump/protoreflect/desc>, which is a much
richer and more useful API than the raw *DescriptorProto messages defined
in github.com/golang/protobuf/protoc-gen-go/descriptor
<https://godoc.org/github.com/golang/protobuf/protoc-gen-go/descriptor>)

----
*Josh Humphries*
jh...@bluegosling.com

On Thu, Apr 13, 2017 at 2:23 PM, 'Carl Mastrangelo' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Yes, if the server has Server Reflection turned on. It is currently off
> by default. It requires that the API use protobuf. You can send an RPC to
> the /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo
> method. It is defined in reflection.proto in each of the repositories.
>
> On Thursday, April 13, 2017 at 10:41:58 AM UTC-7, Constantine wrote:
>>
>> Hi !
>>
>> Is there any way to find out about the list of grpc methods available by
>> an API?
>>



Re: [grpc-io] Re: (gRPC-java) Why are all services singletons?

2017-03-01 Thread Josh Humphries
I think this is referring to the fact that you bind a single server object
for the life of the GRPC server.
Go: https://github.com/grpc/grpc-go/blob/master/server.go#L276
Java: https://github.com/grpc/grpc-java/blob/master/compiler/src/testLite/golden/TestService.java.txt#L167

So it's not a singleton in the traditional pattern sense -- e.g. a global/static
singleton. But it is a singleton within the scope of a GRPC server.

This question has come up before. In the past, it has been asked whether
URL prefixes could be used to route requests for the same service to
different instances. For example, POST to
"/service1/my.package.MyService/MyMethod"
invokes myMethod on some server instance A, and
"/service2/my.package.MyService/MyMethod"
invokes it for a different instance.

I think the justification in the past has been that this would complicate
the protocol as targeting specific implementations of the same service
suddenly requires new behavior in both clients and servers. Instead, the
recommended pattern is to use metadata (e.g. a header) and have an
aggregate implementation re-dispatch to another implementation based on
incoming metadata.
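That re-dispatch pattern can be sketched without any gRPC machinery. All names below are hypothetical; in a real server the routing key would come from incoming request metadata (e.g. a header) rather than a plain string argument:

```go
package main

import "fmt"

// MyService is a stand-in for the generated service interface.
type MyService interface {
	MyMethod(arg string) string
}

type implA struct{}

func (implA) MyMethod(arg string) string { return "A:" + arg }

type implB struct{}

func (implB) MyMethod(arg string) string { return "B:" + arg }

// dispatcher is the single instance actually bound to the server.
// It routes each call to a concrete implementation based on a
// metadata value (here just a plain string for illustration).
type dispatcher struct {
	impls map[string]MyService
}

func (d dispatcher) MyMethod(instance, arg string) string {
	impl, ok := d.impls[instance]
	if !ok {
		return "error: unknown instance " + instance
	}
	return impl.MyMethod(arg)
}

func main() {
	d := dispatcher{impls: map[string]MyService{"service1": implA{}, "service2": implB{}}}
	fmt.Println(d.MyMethod("service1", "hello")) // routed to implA
	fmt.Println(d.MyMethod("service2", "hello")) // routed to implB
}
```

The protocol stays untouched: the server still exposes one service, and the "instance" selection is ordinary application logic.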



*Josh Humphries*
jh...@bluegosling.com

On Wed, Mar 1, 2017 at 2:36 PM, 'Carl Mastrangelo' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Have you actually tried this?  Can you include an error showing that this
> is not possible?
>
> On Monday, February 27, 2017 at 4:49:42 PM UTC-8, Ryan Michela wrote:
>>
>> Each server can only reference one instance of a service implementation
>> for the lifetime of the service, and all requests to that service are
>> routed concurrently to that single, shared instance, correct?
>>
>> On Monday, February 27, 2017 at 4:39:26 PM UTC-8, Carl Mastrangelo wrote:
>>>
>>> No?  I don't know where you could have got that impression but you can
>>> make as many as you like, and share them between Servers as you please.
>>>
>>> On Monday, February 27, 2017 at 3:51:57 PM UTC-8, Ryan Michela wrote:
>>>>
>>>> I mean the instance of the class that implements my service operations.
>>>> The instance you pass to ServerBuilder.addService().
>>>>
>>>> Isn't that instance a singleton from the perspective of gRPC?
>>>>
>>>> On Monday, February 27, 2017 at 12:48:41 PM UTC-8, Carl Mastrangelo
>>>> wrote:
>>>>>
>>>>> What do you mean by Service?   There are hardly any places in our code
>>>>> where something is a singleton.
>>>>>
>>>>> On Saturday, February 25, 2017 at 10:31:59 PM UTC-8, Ryan Michela
>>>>> wrote:
>>>>>>
>>>>>> I'd like to know the design rationale for why gRPC services
>>>>>> implementations are all concurrently executing singletons. There are many
>>>>>> possible instancing and threading modes that could have been used.
>>>>>>
>>>>>> - Singleton instancing
>>>>>> - Per-call instancing
>>>>>> - Per-session instancing
>>>>>>
>>>>>> - Concurrent execution
>>>>>> - Sequential execution
>>>>>>
>>>>>> Concurrent singletons make sense from an absolute throughput angle -
>>>>>> no object instantiation or blocking. But concurrent singletons are hardest
>>>>>> for developers to work with - service implementors must be keenly aware
>>>>>> of shared state and multi-threading concerns.
>>>>>>
>>>>>> 1. Why was concurrent singleton chosen as the only out-of-the-box
>>>>>>    way to implement gRPC (java) services?
>>>>>> 2. Would API for supporting other threading and instancing modes
>>>>>>    be accepted in a PR?
>>>>>>



Re: [grpc-io] [grpc-java] Get custom method option from interceptors

2017-07-19 Thread Josh Humphries
I'm guessing you are interested in doing this with Java (since I see
"[grpc-java]" in the subject).

You can query the grpc ServiceDescriptor for its "schema descriptor". For
proto-generated services, this will be a ProtoFileDescriptorSupplier, which
provides access to the FileDescriptor, which in turn contains everything
you're looking for.

The slight trick is that the interceptor, with each call, only sees a
MethodDescriptor for the method being invoked, which does not appear to
have a reference back to its enclosing ServiceDescriptor. So you'd need to
construct a "sidecar" map that allows access to the service descriptors
from the interceptor(s).

Here's a gist that demonstrates this:
https://gist.github.com/jhump/e8f67087ec5a3918f7b270a4a2b83516 (the sidecar
map is the MethodOptionsRegistry).

(Word of warning: I have not tried to actually compile and run the gist, so
you may find some small errors. But it should at least be instructive in
showing everything you actually need.)
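The same sidecar-map idea can be sketched in plain Go (the types here are hypothetical stand-ins for the Java descriptor APIs; the point is only that the map is built once, at registration time, and consulted per call, since the interceptor sees nothing but the full method name):

```go
package main

import "fmt"

// serviceInfo stands in for whatever service-level descriptor data
// the interceptor needs (e.g. custom method options).
type serviceInfo struct {
	requiresAuth bool
}

// sidecar maps full method names ("/pkg.Service/Method") to
// service-level info. It is built once, when services are registered.
var sidecar = map[string]serviceInfo{}

func registerService(name string, methods []string, info serviceInfo) {
	for _, m := range methods {
		sidecar["/"+name+"/"+m] = info
	}
}

// intercept stands in for a server interceptor: it only sees the
// full method name, and uses the sidecar map to recover the rest.
func intercept(fullMethod string) string {
	info, ok := sidecar[fullMethod]
	if !ok {
		return "unknown method"
	}
	return fmt.Sprintf("requiresAuth=%v", info.requiresAuth)
}

func main() {
	registerService("my.package.MyService", []string{"MyMethod"}, serviceInfo{requiresAuth: true})
	fmt.Println(intercept("/my.package.MyService/MyMethod"))
}
```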




*Josh Humphries*
jh...@bluegosling.com

On Tue, Jul 18, 2017 at 1:40 PM, ran.bi via grpc.io <
grpc-io@googlegroups.com> wrote:

> What's the best way to get custom method option data (
> https://developers.google.com/protocol-buffers/docs/proto3#custom_options)
> from server interceptors?
>



Re: [grpc-io] Sending headers on every message with bidirectional streaming?

2017-09-22 Thread Josh Humphries
Headers are per stream, not per message. The whole stream is a single HTTP
round-trip. So the first thing the server sends back is the response headers.
Then the response payload (which consists of zero or more messages).
Finally, you get trailers.

To include metadata for each response message, you'll have to encode that
into your RPC schema -- e.g. add a map<string,string> field to your
response message (or whatever suits your need, the less stringly-typed
likely the better). Then your server can stuff that metadata into each
response message.
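For example, a hypothetical streaming response message with such a field (names are illustrative only):

```proto
message StreamResponse {
  // Per-message metadata, since gRPC headers are per stream.
  map<string, string> metadata = 1;
  bytes payload = 2;
}
```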



*Josh Humphries*
jh...@bluegosling.com

On Fri, Sep 22, 2017 at 2:36 PM, <thas.hi...@gmail.com> wrote:

>
> So to send header metadata with on each call we can use interceptors. Ex:
> https://github.com/grpc/grpc-java/blob/166108a9438c22d06eb3b371b5ad34a75e14787c/examples/src/main/java/io/grpc/examples/header/HeaderClientInterceptor.java
>
> However, for bidirectional streaming case for a given stub/channel, the
> headers will only be sent once (via start() call). Is there a way to have
> every message after to also have the headers sent?
>
> I see that there is a sendMessage call
> (https://github.com/grpc/grpc-java/blob/166108a9438c22d06eb3b371b5ad34a75e14787c/core/src/main/java/io/grpc/ServerCall.java#L128),
> but it only takes in the request
> message as a parameter. So i'm wondering if there's a way to send other
> parameters in the header on each subsequent message call in the stream.
>
> Of course, I could add the parameters to the message body, but that could
> lead to a pretty large .proto definition.
>
> Thanks!
>



Re: [grpc-io] A few confusing questions about the grpc stream

2017-08-26 Thread Josh Humphries
On Sat, Aug 26, 2017 at 2:09 AM, yihao yang  wrote:

> Hi,
>
> I have some confusing questions about the grpc stream. Hope some one can
> help.
>
> 1. For example of sync stream. Does the client/server side send() mean
> the message is sent out to the network card? Is it possible that the sent
> out msg get lost and the sender don't know it?
>
> HTTP/2 supports flow control to effectively limit the rate of sending
messages. This is handled transparently in GRPC, though send() behavior differs from language
to language. Go will block until the message is sent. Java (whose APIs are
all async/non-blocking) will buffer the message on the sender side in
memory, but exposes an optional API for interacting with flow control (which
allows code to effectively wait until the receiver is ready and the message
can actually be sent).

>
> 2. Is it possible that a send() failed but the receiver receives the
> message later on?
>
> No. When send() fails, it is because the stream is broken. The stream can
no longer be used.

>
> 3. When a send() or recv() failed, is it ok to re-issue the send() and
> recv() function and will the following send()s and recv()s succeed?
>
> No. After a streaming method fails, the stream is broken. To re-try, the
caller must re-issue the stream invocation and potentially re-send (and
re-receive) all messages in the stream. The actual logic will depend on the
spec (or implementation details) of the server's stream handler -- e.g. what it
does with the messages it has already received when a stream is aborted.

>
> 4. Is it possible that the recv() hangs and actually the sender side
> has network partition with the receiver? Does stream have a timeout?
>
> Yes, this is possible. I think many runtimes now support "keep alives"
(which can use HTTP/2 ping frames vs. TCP keep-alive packets) to detect
this condition. Also, a stream can have a timeout, but the timeout is not
for a single send or receive operation but for the entire stream (e.g. a
timeout of 10 seconds means the entire streaming operation must complete
and close in 10 seconds or else it will be aborted with a "deadline
exceeded" error).

>
> 5. How grpc::channel detect the connection_state change?
>
> I don't know the C++ API; I use Go and Java. So I'm afraid I don't know
this one. Go has no such APIs. Java's is very different. I am guessing the
connection state is related to the actual state of a network connection, and
this is known/detected based on a simple state machine for the connection
with transitions based on results of network calls.


> I am using both grpc-go and grpc-c++. I think they may have some
> differences on the behaviors.
>

While the APIs are different (blocking vs. async, error-handling, features
exposed) and take advantage of different language features, the overall
semantics for GRPC should be more-or-less the same across platforms.


>
> Thanks,
> Yihao
>



Re: [grpc-io] Re: How to configure nginx to serve as a load balancer for gRPC?

2017-08-31 Thread Josh Humphries
You could use nginx as a TCP load balancer (layer 4), instead of HTTP
(layer 7). However, the actual load balancing performance will likely be
much worse, especially if clients are using long-lived persistent
connections without any sort of client-side load balancing logic (like
opening multiple connections and using some scheme, like round-robin, to
fan out requests to those connections).

I think there may even be a way to combine TCP load balancing with TLS
termination, though I think there was an issue relating to ALPN (used to
negotiate http/2 protocol during TLS handshake). Here's a related thread:
https://groups.google.com/d/topic/grpc-io/mPcCdVEo-fM/discussion
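For reference, an untested sketch of what the layer-4 approach might look like, assuming an nginx build that includes the stream module (host names borrowed from the config quoted below; with this setup TLS is terminated by the gRPC servers themselves, not by nginx):

```nginx
stream {
  upstream grpc_backends {
    server ip-10-100-30-147:50101;
    server ip-10-100-130-12:50101;
  }

  server {
    listen 50101;
    proxy_pass grpc_backends;
  }
}
```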



*Josh Humphries*
jh...@bluegosling.com

On Thu, Aug 31, 2017 at 12:06 AM, Osman Ali <osman.lx...@gmail.com> wrote:

> Nginx currently doesn't send http2 to your upstream location. You would be
> sending http 1.1 after nginx terminates.
>
> You can use other options:
>
> https://github.com/lyft/envoy
>
> https://nghttp2.org/
>
> On Tuesday, August 29, 2017 at 10:54:08 AM UTC-7, alexm...@gmail.com
> wrote:
>>
>>
>> I understand that the question is more appropriate for nginx group, but
>> still... Does anyone have a _working_ nginx.conf file that does the job?
>> I ended up with 404 from nginx sending gRPC requests (yes, valid
>> requests, verified) with the following nginx.conf:
>>
>> events {
>>   worker_connections  4096;  ## Default: 1024
>> }
>>
>> http {
>>   upstream ip-10-100-30-92 {
>> server ip-10-100-30-147:50101;
>> server ip-10-100-130-12:50101;
>>   }
>>
>>   server {
>> listen 50101;
>> server_name ip-10-100-30-92;
>> location / {
>>   proxy_pass http://ip-10-100-30-92;
>> }
>>   }
>> }
>>
>>
>> This file produces 404 response and a line in /var/log/nginx/access.log:
>>
>> 192.168.13.238 - - [29/Aug/2017:16:56:34 +] "PRI * HTTP/2.0" 400 173
>> "-" "-"
>>
>>
>> Actually, I try to use SSL, and a _working_ example of nginx.conf would
>> be _really_ appreciated.
>>



Re: [grpc-io] Trying to push a small change to the remote repo for grpc-java

2017-11-28 Thread Josh Humphries
Hi, Sonya,
Only committers can actually push changes to the main repo. To contribute,
you need to fork the repo and create a pull request. After the pull request
is reviewed, if accepted, it will be merged into the repo. If you are
trying to push to a personal feature/bugfix branch, you should push to your
own fork instead.

See this article for more info on the typical flow for contributing to
open-source projects in Github:
https://gist.github.com/Chaser324/ce0505fbed06b947d962




*Josh Humphries*
jh...@bluegosling.com

On Sun, Nov 26, 2017 at 7:30 PM, Sonya <sonyakc.2...@gmail.com> wrote:

> Hi GRPC IO team,
>
> I'm trying to push a small change to a branch I created but authentication
> keeps failing. I don't see anything in the README about any additional
> authentication required and I already accepted the contributor license
> agreement as shared here
> <https://github.com/grpc/grpc-java/blob/master/CONTRIBUTING.md>
>
> Pls advise if there's any additional setup I need to do.
>
> Thanks
>
> --
>
> Sonya K Chhabra
>



Re: [grpc-io] RPC level flow control support in grpc for golang

2017-10-26 Thread Josh Humphries
Since Go's API is synchronous/blocking, I am pretty sure the sender always
blocks until the message is actually sent, which respects HTTP/2 flow
control windows.

If you want an async API, where a sender can queue up messages even before
the receiver can accept them, you could push them into a buffered channel
and have another goroutine that is de-queueing from the channel and writing
to the receiver as allowed by flow control.

Java is different because its API is completely async and non-blocking. So
backpressure requires more sophistication in the app code.


*Josh Humphries*
jh...@bluegosling.com

On Wed, Oct 25, 2017 at 8:21 AM, <elda...@gmail.com> wrote:

>
> Hi,
>
> Is it possible to perform manual flow control in the RPC call level in
> grpc-go?
>
> I'm looking for something like the flow control mechanism of the grpc
> library for java:
> https://github.com/grpc/grpc-java/tree/master/examples/src/main/java/io/grpc/examples/manualflowcontrol
>
> I would like my clients/servers to have the option to specify for the
> other side whatever they are ready to receive new messages or not.
>
> I looked for this functionality in grpc-go documentation and the source
> code but only found option to tune the window size of the http2 transport
> layer.
> Is this functionality implemented in grpc-go?
>
> Best,
> Eldad.
>



Re: [grpc-io] client and server stub not generated grpc-go

2018-01-04 Thread Josh Humphries
For Go, you must enable the grpc plugin. This is done via a prefix to the
--go_out parameter:

protoc --go_out=plugins=grpc:. grpc_tester.proto



*Josh Humphries*
jh...@bluegosling.com

On Thu, Jan 4, 2018 at 2:52 PM, Amandeep Gautam <amandeepgaut...@gmail.com>
wrote:

> I am trying to write a grpc server in go but am unable get the generated
> client and server stub in the file generated by plugin.
> Here is the paste of the file generated: https://pastebin.com/kfi99MxK
>
> From what I have researched, it is because of faulty protobuf installation
> but I am not sure what is exactly wrong and how to debug the root cause.
>
> Any help is appreciated.
>



Re: [grpc-io] Re: Getting "all SubConns are in TransientFailure" sending to local grpc service.

2017-12-22 Thread Josh Humphries
Hi, Ravi,
Yes, I understand. That is because grpc.Dial doesn't actually return an
error just because there are issues establishing socket connections -- it
asynchronously starts a client that will transparently retry dialing as
needed (possibly continuously dialing, with some backoff, depending on the
nature of the connection failure).

While you can try to use dial options grpc.WithBlock() and
grpc.FailOnNonTempDialError(true), in my experience this still usually
results in only a timeout error from grpc.Dial. In order to get visibility
into the actual errors, you need a custom dialer that also performs the TLS
handshake so that you can adequately capture the error (log it or
otherwise). This will likely shed much light on why all connections are
always in transient failure state.




*Josh Humphries*
jh...@bluegosling.com

On Fri, Dec 22, 2017 at 8:56 PM, Ravi Jonnadula <rav...@gmail.com> wrote:

> Hi Josh,
>
> Thanks for sharing your thoughts.
>
> In my case, grpc.Dial is successful, there is no error for this call.
> The error occurs when the rpc call is invoked.
>
>
> On Fri, Dec 22, 2017 at 3:42 PM, Josh Humphries <jh...@bluegosling.com>
> wrote:
>
>> If you use a custom dialer, specify the "insecure" dial option in the
>> GRPC client, but then handle TLS in your custom dialer, you can get at the
>> actual error messages that are causing the transport failure.
>>
>> Here's an example I used in a command-line tool, where I wanted to be
>> able to show users a good error message when there was a TLS issue
>> preventing things from working:
>> https://github.com/fullstorydev/grpcurl/blob/master/grpcurl.go#L916
>>
>> I've considered filing a bug with the grpc-go project about this. The
>> ClientConn has information about the actual errors that cause the SubConn
>> transient failure, but provide no API to access it (like for logging/error
>> reporting): https://github.com/grpc/grpc-go/blob/master/clie
>> ntconn.go#L989.
>>
>>
>>
>> 
>> *Josh Humphries*
>> jh...@bluegosling.com
>>
>> On Fri, Dec 22, 2017 at 2:08 PM, Ravi <rav...@gmail.com> wrote:
>>
>>> Hi Yufeng,
>>>
>>> My server side code exactly like yours.
>>> My certificates and keys are fine, because when I plug them into example
>>> route_guide code (grpc-go/examples/route_guide) they work.
>>>
>>> My server-client logic is also fine without certificates. The moment I
>>> enable certificates, I get this error:
>>> rpc error: code = Unavailable desc = all SubConns are in
>>> TransientFailure
>>>
>>>
>>> My Server side code:
>>>
>>> lis, err := net.Listen("tcp", port)
>>> if err != nil {
>>> return fmt.Errorf("Failed to listen: %s", err)
>>> }
>>> creds, err := credentials.NewServerTLSFromFile(certFile, keyFile)
>>> if err != nil {
>>> return fmt.Errorf("could not load keys: %s", err)
>>> }
>>>
>>> opts := []grpc.ServerOption{grpc.Creds(creds)}
>>> grpcServer := grpc.NewServer(opts...)
>>>
>>> pb.RegisterHelloServer(grpcServer, newServer())
>>>
>>> if err := grpcServer.Serve(lis); err != nil {
>>> return fmt.Errorf("Failed to start Hello Server: %s", err)
>>> }
>>>
>>>
>>> My Client side code:
>>> 
>>>
>>>
>>> creds, err := credentials.NewClientTLSFromFile(certFile, "")
>>> if err != nil {
>>> log.Fatalf("could not load cert: %s", err)
>>> }
>>> conn, err = grpc.Dial(port, grpc.WithTransportCredentials(creds))
>>> if err != nil {
>>> log.Fatalf("Failed to connect to server: %s", err)
>>> return
>>> }
>>>
>>> defer conn.Close()
>>> c := pb.NewHelloClient(conn)
>>>
>>> r, err := c.HelloServer(context.Background(), &pb.HelloRequest{Name:
>>> "Myname", Id: 10})
>>>
>>>
>>>
>>>
>>> On Thursday, December 21, 2017 at 7:18:11 PM UTC-8, Yufeng Liu wrote:
>>>>
>>>> Hi Ravijo,
>>>>
>>>> I have fixed the issue, I just change the service code below. The cert
>>>> is bought normal cert from “https://www.rapidssl.com/“.
>>>>
>>>> certificate, err := credentials.NewServerTLSFromFile(conf.CRT,
>>>> conf.KEY)
>>>> if err != nil {
>>>>

Re: [grpc-io] Compose or nest a service in another service?

2018-01-27 Thread Josh Humphries
The question could be asked about either protobuf or gRPC with different
answers.

The protobuf IDL that does *not* allow combining/composing services this
way. A single service can only enumerate methods. You cannot nest services
inside others.

However, gRPC does support such composition -- in a way -- by letting you
expose multiple services from a single server. In fact, you could have a
single server object that implements all of the protobuf service
interfaces. So you are effectively using an implementation language (not
protobuf) to compose the services, and then exposing all of the interfaces
from a single gRPC server.
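A gRPC-free Go sketch of that composition -- one object implementing two hypothetical service interfaces, then registered as both:

```go
package main

import "fmt"

// Stand-ins for two generated service interfaces.
type UserService interface{ GetUser(id int) string }
type AdminService interface{ Disable(id int) string }

// combined implements both interfaces with one object, so state
// (here, the users map) is trivially shared between the services.
type combined struct{ users map[int]string }

func (c *combined) GetUser(id int) string { return c.users[id] }

func (c *combined) Disable(id int) string {
	delete(c.users, id)
	return fmt.Sprintf("disabled %d", id)
}

func main() {
	c := &combined{users: map[int]string{1: "alice"}}
	// With real gRPC this would be two RegisterXxxServer(srv, c) calls
	// on the same server.
	var u UserService = c
	var a AdminService = c
	fmt.Println(u.GetUser(1))
	fmt.Println(a.Disable(1))
	fmt.Println(u.GetUser(1) == "") // the two "services" share state
}
```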



*Josh Humphries*
jh...@bluegosling.com

On Sat, Jan 27, 2018 at 3:22 AM, Thomas Sörensen <sorense...@gmail.com>
wrote:

> Not sure if this is gRPC question or a protocol buffer questions but I try
> here.
>
> I know that you can compose or nest messages in other messages and import
> messages from other .proto files and I wonder if it is possible to do the
> same for the service definition? If it is not possible now is it something
> that is planned to support in the future?
>
> I read that is possible in Apache Thrift so perhaps you have had any
> discussions on about supporting that?
>
> Best regards
> Thomas
>



Re: [grpc-io] grpc Dial behavior

2018-02-22 Thread Josh Humphries
It is a persistent connection. But if you only have one backend (or, more
importantly, one hostname, such as behind a hardware load balancer and/or
proxy), the client does not create redundant connections. So there is some
downtime while it re-creates a socket connection after it gets disconnected.

This downtime is usually short, so you can usually get by using the
grpc.FailFast(false) call option. The default is fail-fast, which means the
RPC fails immediately if a connection is not available. With fail-fast set
to false, the call instead waits for the connection to become available. You
should definitely pair that with a timeout, so the call doesn't wait too
long for a connection to recover.

You set a timeout via the context, as you would for other I/O that should
be deadline-driven or cancellable.

You can use an interceptor to set the timeout for all calls that do not
already have a timeout (e.g. apply a default, so that you don't have to
specify explicit timeouts everywhere in code). The interceptor can also add
the fail-fast call option to every call, so you don't have to do that
explicitly everywhere, too.



*Josh Humphries*
jh...@bluegosling.com

On Thu, Feb 22, 2018 at 1:57 PM, <amit.chan...@gmail.com> wrote:

> Hi,
> I have a question regarding the grpc Dial behavior. I have a
> server, which as part of the incoming request needs to talk to another
> endpoint using grpc. Currently, on the server spawn, it does grpc.Dial to
> the other endpoint. and when the request comes, it does a grpc on this
> established connection. Two questions:
>
> 1. Is the connection via grpc.Dial persistent?
> 2. On the connection loss to the other endpoint, my grpc requests are
> failing
> err rpc error: code = Unavailable desc = all SubConns are in
> TransientFailure
> Do i need to dial out per request, that sounds expensive as the connection
> establishment can take time. I was under the impression that Dial will
> indefinitely try to establish the connection. Do i need to explicitly turn
> on keepAlive to make that happen?
>
> 3. Also, if i want to limit how long the grpc request should take, one way
> i know of is via the golang context. I was reading somewhere that the grpc
> call itself, you can pass timeout. Which method is preferred?
>
> Thanks
> Amit
>



Re: [grpc-io] protobuf-2.5.0/bin/protoc doesn't recognize "stream"?

2018-02-15 Thread Josh Humphries
The stream keyword was added in protobuf 3.0. (Version 2.5 is quite old.)
Support for map types and for proto3 syntax (which comes with a handful of
language restrictions and semantic changes for messages) was also added in
3.0.
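
For comparison, here is a proto3 rendering of the quoted helloworld.proto
that a 3.x protoc accepts (a sketch; generating the service stubs also
requires the gRPC plugin for your target language):

```proto
syntax = "proto3";
package helloworld;

service Greeter {
  // Server-streaming RPC: one request, a stream of replies.
  rpc SayHello (HelloRequest) returns (stream HelloReply) {}
}

message HelloRequest {
  string name = 1;  // proto3 originally had no "optional" label on fields
}

message HelloReply {
  string message = 1;
}
```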


*Josh Humphries*
jh...@bluegosling.com

On Thu, Feb 15, 2018 at 4:50 PM, Yanpeng Wu <pengfl...@gmail.com> wrote:

> *helloworld.proto*
>
> syntax = "proto2";
> package helloworld;
>
> // The greeting service definition.
> service Greeter {
>   // Sends a greeting
>   rpc SayHello (HelloRequest) returns (stream HelloReply) {}
> }
>
> // The request message containing the user's name.
> message HelloRequest {
>   optional string name = 1;
> }
>
> // The response message containing the greetings
> message HelloReply {
>   optional string message = 1;
> }
>
>
> $ ../protobuf-2.5.0/bin/protoc -I. --cpp_out=. helloworld.proto
> helloworld.proto:7:47: Expected ")".
>
> Furthermore, are there any harms to incorporate grpc-1.8.4 with
> protobuf-2.5.0?
>
> Thanks!
>
>



Re: [grpc-io] Re: Getting "all SubConns are in TransientFailure" sending to local grpc service.

2017-12-22 Thread Josh Humphries
If you use a custom dialer, specify the "insecure" dial option in the GRPC
client, but then handle TLS in your custom dialer, you can get at the
actual error messages that are causing the transport failure.

Here's an example I used in a command-line tool, where I wanted to be able
to show users a good error message when there was a TLS issue preventing
things from working:
https://github.com/fullstorydev/grpcurl/blob/master/grpcurl.go#L916

I've considered filing a bug with the grpc-go project about this. The
ClientConn has information about the actual errors that cause the SubConn
transient failure, but provides no API to access it (e.g. for logging/error
reporting): https://github.com/grpc/grpc-go/blob/master/clientconn.go#L989.



----
*Josh Humphries*
jh...@bluegosling.com

On Fri, Dec 22, 2017 at 2:08 PM, Ravi <rav...@gmail.com> wrote:

> Hi Yufeng,
>
> My server side code exactly like yours.
> My certificates and keys are fine, because when I plug them into example
> route_guide code (grpc-go/examples/route_guide) they work.
>
> My server-client logic is also fine without certificates. The moment I
> enable certificates, I get this error:
> rpc error: code = Unavailable desc = all SubConns are in TransientFailure
>
>
> My Server side code:
>
> lis, err := net.Listen("tcp", port)
> if err != nil {
> return fmt.Errorf("Failed to listen: %s", err)
> }
> creds, err := credentials.NewServerTLSFromFile(certFile, keyFile)
> if err != nil {
> return fmt.Errorf("could not load keys: %s", err)
> }
>
> opts := []grpc.ServerOption{grpc.Creds(creds)}
> grpcServer := grpc.NewServer(opts...)
>
> pb.RegisterHelloServer(grpcServer, newServer())
>
> if err := grpcServer.Serve(lis); err != nil {
> return fmt.Errorf("Failed to start Hello Server: %s", err)
> }
>
>
> My Client side code:
> 
>
>
> creds, err := credentials.NewClientTLSFromFile(certFile, "")
> if err != nil {
> log.Fatalf("could not load cert: %s", err)
> }
> conn, err = grpc.Dial(port, grpc.WithTransportCredentials(creds))
> if err != nil {
> log.Fatalf("Failed to connect to server: %s", err)
> return
> }
>
> defer conn.Close()
> c := pb.NewHelloClient(conn)
>
> r, err := c.HelloServer(context.Background(), &pb.HelloRequest{Name: "Myname", Id: 10})
>
>
>
>
> On Thursday, December 21, 2017 at 7:18:11 PM UTC-8, Yufeng Liu wrote:
>>
>> Hi Ravijo,
>>
>> I have fixed the issue, I just change the service code below. The cert is
>> bought a normal cert from "https://www.rapidssl.com/".
>>
>> certificate, err := credentials.NewServerTLSFromFile(conf.CRT, conf.KEY)
>> if err != nil {
>> log.Errorf("could not load server key pair: %s", err)
>> }
>>
>> I don’t know that can help you anything.
>>
>>
>> On 22 Dec 2017, at 10:57 AM, rav...@gmail.com wrote:
>>
>> How to fix / debug such issue?
>>
>> I keep getting this error:
>> rpc error: code = Unavailable desc = all SubConns are in TransientFailure
>>
>> The same client - server logic works fine if I remove the TLS credentials
>> ... any help to resolve would be appreciated!
>>
>>
>> On Monday, December 4, 2017 at 5:44:41 AM UTC-8, Paul Breslin wrote:
>>>
>>> We didn't really solve it but discovered a work-around. For some reason
>>> if I start my services in one script and then run the tests from a separate
>>> script it seems to work fine. So it may have to do with some extra delay
>>> time between starting the containers and then attempting to run the client
>>> code.
>>>
>>>
>>> On Monday, December 4, 2017 at 7:22:22 AM UTC-5, yuf...@chope.co wrote:
>>>>
>>>> Hi Paul,
>>>>
>>>> can i ask did you have solved the issue. i have the same problem..
>>>>
>>>> On Tuesday, November 14, 2017 at 5:36:19 AM UTC+8, Paul Breslin wrote:
>>>>>
>>>>>
>>>>> I'm running local grpc services under Docker for Mac. All has been
>>>>> fine but today I started getting intermittent failures:
>>>>> rpc error: code = Unavailable desc = all SubConns are in
>>>>> TransientFailure
>>>>> when my test code sends a message to one of the services. The test
>>>>> code also runs inside a docker container.
>>>>>
>>>>> Sometimes restarting the docker daemon would make this go away but for

Re: [grpc-io] Handle multi-client streaming on the server

2018-08-05 Thread Josh Humphries
Streams from multiple clients do not get intermingled. The server handler
is invoked for each stream, and the handler gets a reference to the stream,
for consuming events from just that one client.

Also, a single client's requests always arrive in order. gRPC is built on
top of TCP, which handles reliable delivery and ordering of network packets.
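
In Go, that per-stream isolation is visible in the handler signature: each
client's upload gets its own handler invocation and its own stream value. A
fragment of a sketch, assuming a hypothetical client-streaming UploadImage
method (the generated stream type and the Chunk/UploadStatus messages are
made-up names):

```go
// Each invocation of this handler serves exactly one client's stream;
// chunks from other clients arrive on other invocations, never this one.
func (s *server) UploadImage(stream pb.ImageService_UploadImageServer) error {
	var data []byte
	for {
		chunk, err := stream.Recv()
		if err == io.EOF {
			// Client half-closed: the upload is complete, in order.
			return stream.SendAndClose(&pb.UploadStatus{Ok: true})
		}
		if err != nil {
			return err
		}
		data = append(data, chunk.Content...)
	}
}
```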


*Josh Humphries*
jh...@bluegosling.com



On Sat, Aug 4, 2018 at 10:07 AM  wrote:

> I am setting up a grpc service that will upload images in chunks to the
> server. What I am trying to understand in trying to setup the server logic
> is how the grpc service handles multiple clients streaming at the same time.
>
> I assume that streaming events will collide meaning client 1 starts
> uploading an image, the grpc server gets the events and starts saving the
> image to file. Then client 2 starts uploading and client 2's upload
> requests will be mixed in with client 1's upload requests.
>
> How do you handle out of order uploads on the server so that the image
> data does not get mixed up with the wrong file?
>
> Also while streaming can a single clients requests come in out of order
> too?
>



Re: [grpc-io] How to intercept every client request and add some attributes to it and collect at the server end

2018-08-05 Thread Josh Humphries
I think several of the languages support the interceptor pattern (I am
quite familiar with the Java and Go runtime libraries, which do). This
allows you to register a client interceptor that will get to see every RPC.
(Intercepting streaming RPCs in Go is a bit more complicated due to having
a different interface than the interceptor for unary RPCs.)

When you have cross-cutting attributes to associate with every RPC,
metadata is probably the way to go. So the interceptor could add the
attributes you mention as request metadata. For Go, you'd probably need to
have this data stored in a context.Context, which the interceptor will
query and then store in request metadata. For Java, there is also a context
type, but it uses thread-local storage (so it can be easier to interact
with and does not require you to explicitly pass the context to/through
every function).
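
In Go, a sketch of such a client interceptor might look like the following.
The context accessor and the attribute names are made up for illustration;
grpc and google.golang.org/grpc/metadata are the real grpc-go packages:

```go
// accountFromContext is a hypothetical accessor for app-level data that
// callers have previously stored in the context.
func accountInterceptor(
	ctx context.Context, method string, req, reply interface{},
	cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption,
) error {
	if acct, ok := accountFromContext(ctx); ok {
		// Attach the attributes as outgoing request metadata (headers),
		// which the server can read from its incoming context.
		ctx = metadata.AppendToOutgoingContext(ctx,
			"account", acct.Name,
			"account-holder", acct.Holder,
			"account-location", acct.Location,
		)
	}
	return invoker(ctx, method, req, reply, cc, opts...)
}

// Installed once, at dial time:
//   grpc.Dial(addr, grpc.WithUnaryInterceptor(accountInterceptor))
```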


*Josh Humphries*
jh...@bluegosling.com



On Sun, Aug 5, 2018 at 4:11 AM shailendra kumar 
wrote:

>
> Whenever client call to server, i want to add some attribute like account,
> accountHolder, accountLocation  and their values along with client request.
> At the server side, i want to collect these info. Please suggest for grpc
> call as well as rest call
>



Re: [grpc-io] How to intercept every client request and add some attributes to it and collect at the server end

2018-08-06 Thread Josh Humphries
On Mon, Aug 6, 2018 at 3:13 PM shailendra kumar 
wrote:

> Thanks Josh.
> I tried with ClientInterceptor and ServerInterceptor.   Its working fine.
> I have implemented in java.
> Will this interceptor work for http request through postman ?
>

I assume you are using grpc-gateway (or something very similar) in front of
your service, to proxy JSON+HTTP 1.1 to gRPC. Is that correct? If so, yes,
the interceptor will be invoked for these requests.


> e.g -  for path /v1/health
> will these interceptor invoke the request ??
>

Sorry, I don't understand the question. If you are asking whether the
interceptor will be invoked for these requests, see previous answer. If
not, do you mind re-wording, perhaps adding a little more detail?

> rpc healthCheck(google.protobuf.Empty) returns (HealthCheckResponse) {
> option (google.api.http) = {
> get: "/v1/health"
> };
> }
>
>
> On Sun, Aug 5, 2018 at 5:23 PM, Josh Humphries 
> wrote:
>
>> I think several of the languages support the interceptor pattern (I am
>> quite familiar with the Java and Go runtime libraries, which do). This
>> allows you to register a client interceptor that will get to see every RPC.
>> (Intercepting streaming RPCs in Go is a bit more complicated due to having
>> a different interface than the interceptor for unary RPCs.)
>>
>> When you have cross-cutting attributes to associate with every RPC,
>> metadata is probably the way to go. So the interceptor could add the
>> attributes you mention as request metadata. For Go, you'd probably need to
>> have this data stored in a context.Context, which the interceptor will
>> query and then store in request metadata. For Java, there is also a context
>> type, but it uses thread-local storage (so it can be easier to interact
>> with and does not require you to explicitly pass the context to/through
>> every function).
>>
>> 
>> *Josh Humphries*
>> jh...@bluegosling.com
>>
>>
>>
>> On Sun, Aug 5, 2018 at 4:11 AM shailendra kumar 
>> wrote:
>>
>>>
>>> Whenever client call to server, i want to add some attribute like
>>> account, accountHolder, accountLocation  and their values along with client
>>> request. At the server side, i want to collect these info. Please suggest
>>> for grpc call as well as rest call
>>>



Re: [grpc-io] Re: Some questions after seeing the Grpc concepts...

2018-03-29 Thread Josh Humphries
On Thu, Mar 29, 2018 at 3:07 AM, Benjamin Krämer 
wrote:

> Hi, nice that you have a look at gRPC. I will answer your questions one by
> one.
>
>>
>>- RXJS seems like a perfect library to build into this - specifically
>>because it supports returning 1 or a stream of something. Would also 
>> handle
>>the timeout case with RX's built in primitives too. How would one go about
>>modifying a code generator (such as the Node/Browser code) to use RX?
>>And/or is that easy? What language and build environment do I need to get
>>involved in?
>>
>>  You would have to write a plugin for protoc in C. You can use any of the
> existing generators for whatever language you need as a template. Those are
> located here: https://github.com/grpc/grpc/tree/master/src/compiler
>
>>
>>- It mentions HTTP/2 as the transport protocol, which requires SSL,
>>which can lead to issues when you don't have a signed cert. What are the
>>steps to go through when you don't have a cert for localhost development?
>>
>> You can either choose to disable TLS (grpc.credentials.createInsecure())
> or you could create self-signed certs and use those. The official
> documentation gives a good overview: https://grpc.io/docs/guides/auth.html#nodejs
> If you need help on how to create self-signed certs, you can have a look
> at this example for C#. The cert creation process is the same:
> https://stackoverflow.com/a/37739265/3865064
> Just use localhost for %COMPUTERNAME% and %CLIENT-COMPUTERNAME%.
>
>>
>>- If anyone has experience with Vert.x you may know about something
>>called the Event Bus. The event bus lets you connect many peers that all
>>add to the global pool of available microservices... is there anything
>>equivalent to this in GRPC? For example if I have 10 git repos that each
>>add 4 or 5 rpc services, can you connect to a service by ONE main URL or
>>does each URL need to be configured separately? (is there a way to create 
>> a
>>global even bus that the services live on?) Vert.x also provides
>>round-robin features when the same service is deployed to multiple hosts.
>>
>>  I'm sorry but I don't know about Vert.x, but I hope someone else could
> answer this question for you.
>

I am not familiar with Vert.x or its Event Bus, but from the way you
describe it, I believe the answer is "yes". Services in gRPC are different
from servers. A single server can expose many services. When you create a
server, you register one or more services with it. That would be the way to
contribute multiple "microservices" to the same server (at one URL). On the
client, they are easily configured as one URL by creating a single client
connection to that one server. Then you can create different stubs for each
exposed service. Stubs wrap a client connection.

So a service is an interface (set of methods, defined in a proto file). And
a client connection is logical connection to a server (logical because it
could actually be multiple physical connections, like in a case where the
service is load balanced across multiple backend replicas). The two are
orthogonal.
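
In Go terms, a sketch (the service and package names here are hypothetical;
the generated NewXxxClient constructors are the stubs, and stubs are cheap):

```go
// One logical client connection to the one server URL...
conn, err := grpc.Dial("server.example.com:443",
	grpc.WithTransportCredentials(creds))
if err != nil {
	log.Fatal(err)
}
defer conn.Close()

// ...and one stub per service exposed by that server, all sharing it.
users := userpb.NewUserServiceClient(conn)
orders := orderpb.NewOrderServiceClient(conn)
```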





[grpc-io] Re: [go-nuts] go-grpc question

2018-10-17 Thread Josh Humphries
*+grpc-io@googlegroups.com*

*moving golang-n...@googlegroups.com to BCC*

In general, connections are not cheap, but stubs are. Actual
implementations for some languages differ, but Go complies with this.

What that means is that, generally speaking, you should not try creating
the *grpc.ClientConn for each request. Instead create it once and cache it.
You *can* create the stub just once and cache it (they are safe to use
concurrently form multiple goroutines). But that is not necessary; you
could also create the stub for each request, using the cached connection.

In practice, creating a new connection for each request will have overhead
in terms of allocations, creating and tearing down goroutines, and also in
terms of latency, to establish a new network connection every time. So it
is advisable to cache and re-use them. However, if you are not using TLS,
it *may be* acceptable to create a new connection per request (since the
network connection latency is often low, at least if the client and server
are in the same region/cloud provider). If you are using TLS, however,
creating a connection per request is a bit of an atrocity: you are not only
adding the extra latency of a TLS handshake to every request (typically 10s
of milliseconds IIRC), but you are also inducing a potentially huge amount
of load on the server, by making it perform many more digital signatures
(one of the handshake steps) than if the clients cached and re-used
connections.

Historically, the only reason it might be useful to create a new connection
per request in Go was if you were using a layer-4(TCP) load balancer. In
that case, the standard DNS resolver would resolve to a single IP address
(that of the load balancer) and then only maintain a single connection.
This would result in very poor load balancing since 100% of that client's
requests would all route to the same backend. This would also happen when
using standard Kubernetes services (when using gRPC for server-to-server
communication), as kube-dns resolves a service name into a single virtual
IP. I'm not sure of the current state of the world regarding TCP load
balancers and the grpc-go project, but if it's still an issue and you run
services in Kubernetes, you can use a 3rd party resolver:
https://github.com/sercand/kuberesolver.


*Josh Humphries*
jh...@bluegosling.com


On Wed, Oct 17, 2018 at 2:13 AM  wrote:

> Hello,
>
> I intend to use grpc between two fixed endpoints (client and server) where
> the client receives multiple requests (the client serves as a proxy) which
> in turn sends a grpc request to the server. I wanted to know of the
> following would be considered good practice:
>
> a) For every request that comes in at the client, do the following in the
> http handler:
>a) conn := grpc.Dial(...)// establish a grpc connection
>b) client := NewClient(conn)// instantiate a new client
>c) client.Something(..) // invoke the grpc method on
> the client
>
> i.e Establish a new connection and client in handling every request
>
> b) Establish a single grpc connection between client and server at init()
> time and then inside the handler, instantiate a new client and invoke the
> grpc method
>a) client := NewClient(conn)// instantiate a new client
>b) client.Something(..) // invoke the grpc method on
> the client
>
> c) Establish a connection and instantiate a client at init() and then in
> every handler, just invoke the grpc method.
>a) client.Something(..)
>
> The emphasis here is on performance as I expect the the client to process
> a large volume of requests coming in. I do know that grpc underneath
> creates streams but at the end of the day a single
> logical grpc connection runs on a single TCP connection (multiplexing the
> streams) on it and having just one connection for all clients might not cut
> it. Thoughts and ideas appreciated !
>
> Thanks,
> Nakul
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [grpc-io] Re: Is it availble to do RPC requests from server to client with gRPC?

2018-11-01 Thread Josh Humphries
On Thu, Nov 1, 2018 at 8:16 AM  wrote:

>
>
> On Thursday, November 1, 2018 at 16:02:25 UTC+5, Ivan Penchev wrote:
>>
>> The only way to do this is, for the client first to contact the server.
>>
>> Then on the server you get an IServerStreamWriter, which gives you an
>> option to steam to the client .
>>
>> The reason you can't do it the other way is that the server must know all
>> the IP addresses to connect to :)
>>
>>
>> It is ok, if client should connect to server before any requests from
> server.
> But if i understand correctly,  IServerStreamWriter is one-direction
> streaming from server to client.
> But how to organize rpc requests (request-response model) from server to
> client, if we already have established connection from client
>

You can use a bi-directional stream, where each message that the server
sends on a stream will have a single message from the client in reply. The
actual request and response types can both have a oneof: the *response* message
has an option in the oneof for each actual RPC operation (and its request
type); the *request* message (sent by the client) would actually define the
response types. You can see an example of something like this in the server
reflection service.
However, that is a normal client-to-server sort of RPC. You'd just swap the
request and response types (and have the server be the one that initiates
requests by sending a "response" message first, and getting a "request" in
reply).

For graceful tear-down, the server will simply have to stop using the
stream for sending requests, wait for replies to all outstanding
operations, and then close the stream/terminate the RPC. When the client is
shutting down, it could stop accepting messages from the server (after
every one received, send an immediate error message along the lines of
"shutting down").

If you want to support out of order execution (e.g. client replies can be
sent in different order than server requests were received), then the
request and response schemas will need to have an envelope with some sort
of request ID, to correlate replies with their original request.
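
A hypothetical shape for such an envelope (all names here are illustrative,
not from any real service):

```proto
// The server initiates work by sending ServerRequest messages; the client
// answers with ClientReply messages carrying the same id, so replies can
// be matched up even when they arrive out of order.
message ServerRequest {
  uint64 id = 1;              // correlates a reply with its request
  oneof request {
    DoFooRequest foo = 2;
    DoBarRequest bar = 3;
  }
}

message ClientReply {
  uint64 id = 1;              // copied from the originating ServerRequest
  oneof reply {
    DoFooResponse foo = 2;
    DoBarResponse bar = 3;
  }
}

service ReverseRpc {
  // Bi-directional stream; the "response" direction carries the requests.
  rpc Channel(stream ClientReply) returns (stream ServerRequest);
}
```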


Aside: I've been working on something you may find useful -- a generic
mechanism for tunneling gRPC. The tunnel is set up via a gRPC service that
provides the same functionality as a low-level gRPC transport. There are a
few interesting use cases, but the server-to-client one is the most
interesting IMO. If you're curious, you can poke around at the
work-in-progress: https://github.com/jhump/grpctunnel. (Unfortunately, I
don't know when I'll get around to really finishing this library, but it
may have some interesting ideas/code you could use.)



-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAO78j%2BKqmAkvktXRbmMSYgQ9qozeY%3D7WgQtJtzBUX9hiB_f7kQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] a Nodejs version of GRPCurl ??

2018-09-25 Thread Josh Humphries
On Tue, Sep 25, 2018 at 9:49 AM books  wrote:

> I prefer Node.js for the flexibility of programming without compiling
> code, and I need to talk to a gRPC server written in Go. I met some
> problems with this createSsl call:
> https://grpc.io/grpc/node/grpc.credentials.html#.createSsl__anchor
>
> With grpcurl, using cacert, cert, key, and insecure, I am able to debug
> against the gRPC server written in Go. But with the Node.js
> grpc.credentials, either createSsl or createInsecure call gets confused,
> always saying bad cert.
>
> I am thinking:
> 1) If anyone has seen a Node.js version of gRPCurl, I may learn some code
> from there:
> https://github.com/fullstorydev/grpcurl


I don't know of a Node.js version of gRPCurl, but I do know of some other
dynamic gRPC stuff written in Node.js. @konsumer on GitHub has done a lot
of stuff with this. A quick scan of those repos reveals this:
https://github.com/konsumer/grpc-dynamic-gateway
It's not gRPCurl, but it does use dynamic gRPC, so you might glean what you
need from the source for it.


> 2) What's the terminology mapping between the Go code's cacert/cert/key
> and the Node.js root_certs, private_key, cert_chain?
>
>

root_cert == cacert
This is one or more certificates for trusted "root certificate
authorities".

cert == cert_chain
This is the full certificate that the server presents, including its
public key as well as the full chain of trust (e.g. certificate
issuer/authority signatures).

key == private_key
This is the private key that corresponds to the public key in the
server's certificate.


>
>createSsl( [root_certs] [, private_key] [, cert_chain])
> Create an SSL Credentials object. If using a client-side certificate,
> both the second and third arguments must be passed. Additional peer
> verification options can be passed in the fourth argument as described
> below.
>
> Parameters:
>
> root_certs (Buffer, optional)
> The root certificate data
>
> private_key (Buffer, optional)
> The client certificate private key, if applicable
>
> cert_chain (Buffer, optional)
> The client certificate cert chain, if applicable
>
> verify_options.checkServerIdentity (function)
> Optional callback receiving the expected hostname and peer certificate
> for additional verification. The callback should return an Error if
> verification fails and otherwise return undefined.
>
> 3) The Node.js code calls the npm package `@grpc/proto-loader` to load
> *.proto files, but gRPCurl supports the binary protoset form of the
> protobuf definitions as well as plaintext *.proto. I wonder if Node.js
> gRPC has similar binary protoset support?
>
> https://github.com/grpc/grpc/blob/v1.15.0/examples/node/dynamic_codegen/route_guide/route_guide_client.js
>
> $ grpcurl --help
>
>   -cacert string
> File containing trusted root certificates for verifying the server.
> Ignored if -insecure is specified.
>   -cert string
> File containing client certificate (public key), to present to the
> server. Not valid with -plaintext option. Must also
> provide -key option.
>   -key string
> File containing client private key, to present to the server. Not
> valid
> with -plaintext option. Must also provide -cert option.
>   -insecure
> Skip server certificate and domain verification. (NOT SECURE!). Not
> valid with -plaintext option.
>
>   -protoset value
> The name of a file containing an encoded FileDescriptorSet. This
> file's contents will be used to determine the RPC schema instead of
> querying for it from the remote server via the gRPC reflection API.
> When set: the 'list' action lists the services found in the given
> descriptors (vs. those exposed by the remote server), and the
> 'describe' action describes symbols found in the given descriptors.
> May specify more than one via multiple -protoset flags. It is an
> error to use both -protoset and -proto flags.
>

Re: [grpc-io] Proposal: descriptor (.pb) to .proto files

2019-01-06 Thread Josh Humphries
FWIW, I have a Go implementation of the same functionality here:
https://godoc.org/github.com/jhump/protoreflect/desc/protoprint


*Josh Humphries*
jh...@bluegosling.com


On Sun, Jan 6, 2019 at 7:48 AM Alex Van Boxel 
wrote:

> Hi,
>
> I'm currently prototyping a Descriptor to .proto files dumper. I'm
> wondering if it would be something that would be of interest to include in
> the java-utils part of *grpc-java*?
>
> We're planning to use it for dynamically generated schemas that we then
> dump on a filesystem to check into git.
>
> (it's work in progress, but already generates pretty complete proto files)
>
> https://github.com/anemos-io/metastore/blob/master/server/src/main/java/io/anemos/metastore/ProtoFileWriter.java
>
> If this is something that could be included I'll do the effort of getting
> an environment up that is able to build grpc-java.
>


Re: [grpc-io] Re: Does gRPC use only http2? tcpdump from a particular client does not show it as http2

2018-11-29 Thread Josh Humphries
The main gRPC libraries *only* use HTTP/2. As you saw, they negotiate the
same protocol during the NPN/ALPN step of the TLS handshake: "h2". It is more
likely that whatever analysis tool you used in the first case did not
recognize "h2" as the HTTP/2 protocol, and so treated it as an unknown
application protocol.


*Josh Humphries*
jh...@bluegosling.com


On Thu, Nov 29, 2018 at 1:42 AM  wrote:

> Thanks for the prompt response.
> We use Python grpcio 1.0.0.
> No, as I mentioned, the version will not be updated for now, as the network
> device I am talking about is already deployed in customer networks.
> We have to make our application work with this for now.
>
> My question is more toward:
> What does gRPC rely on for HTTP/2 capability? (some tool in the OS
> environment, or is HTTP/2 capability built into gRPC?)
> Why did the same version in another Ubuntu VM use HTTP/2, whereas in this
> specific environment it did not?
>


Re: [grpc-io] Client Loadbalancing with Kubernetes SVC

2018-09-18 Thread Josh Humphries
Unless things have changed recently, the default kubedns result for a
standard k8s service will have a single IP: the virtual IP of the service.
This in turn causes the gRPC client to configure just a single socket
connection and route all requests on the one socket. Since kubeproxy load
balancing through that virtual IP is layer 4, this can result in very poor
load balancing, especially if you have unbalanced clients (e.g. a small
number of clients that constitute the majority of RPC traffic).

If you use "headless" services in k8s on the other hand, the DNS query
should return multiple pod IPs. This causes the gRPC client to in turn
maintain multiple connections, one to each destination pod. However, I am
not sure how responsive this will be to topology changes (as pod instances
are auto-scaled, or killed and rescheduled, they will move around and
change IP address). It would require disabling DNS caching and making sure
the service host name is resolved/polled regularly.

Another solution is to use a custom gRPC resolver that talks to the k8s API
to watch the service topology and convey the results to the gRPC client.
For Go, this is implemented in an open-source package:
github.com/sercand/kuberesolver

(Most of my experience is with Go. So your mileage may vary if using a
runtime other than Go. But I think the various implementations largely
behave quite similarly.)
----
*Josh Humphries*
jh...@bluegosling.com


On Tue, Sep 18, 2018 at 7:10 AM  wrote:

> Hello,
>
> Does it make sense to have client loadbalancer with gRPC when we are using
> gRPC server in a Kubernetes cluster?
> Because client will dial a service DNS and will always retrieve IP of
> service and not IP of Pods behind it.
>
> NB: If already seen this blog entry
> https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
>
> Thanks
>
>


[grpc-io] Where is the ServiceConfig proto definition?

2019-05-16 Thread Josh Humphries
The doc on service configs includes phrases like this:

// The format of the value is that of the 'Duration' type defined here:
// https://developers.google.com/protocol-buffers/docs/proto3#json

This makes it apparent that somewhere there is a canonical version of this
structure defined as a proto.

The gRFC for health checks makes it even more obvious:

We will then add the following top-level field to the ServiceConfig proto:

HealthCheckConfig health_check_config = 4;

However, I can find no definition for a ServiceConfig proto anywhere. I've
looked in a few Google and gRPC repos, including the google.rpc package in
the googleapis repo (
https://github.com/googleapis/googleapis/tree/master/google/rpc).

Furthermore, the Go runtime (maybe others?) uses an unstructured JSON blob
for providing this configuration, when a much better API would allow for
providing an actual structured type (such as the Go structs generated from
the ServiceConfig proto).

I'm currently working on a package (not open-source, at least not yet) to
make it easy to configure services and have them expose their own configs
directly via an RPC interface (so instead of having to use DNS or other
service discovery mechanisms, a client can just ask the server for its
config via an RPC). That means I am creating a proto representation of it.
But it would be nice if I could just lean on some standard, canonical
definition of the proto instead.


*Josh Humphries*
jh...@bluegosling.com



Re: [grpc-io] Sending a single huge message in one unary call vs sending chunks of messages with streaming

2020-05-21 Thread Josh Humphries
On Thu, May 21, 2020 at 11:06 PM  wrote:

> Thx Josh for the reply.
>
> let me clarify
>
> 1. I'm sending a 100 MB+ string as a bytes field with protobuf. In the
> streaming case, the server preallocates 100 MB+ (size info provided in the
> first streaming message) and keeps appending the broken-up bytes sent
> through the streaming RPC; after collecting and appending all the bytes, it
> responds. For unary, once it receives the whole string, it responds. So I
> would rather say the server is more like a no-op than that it is
> preprocessing the data.
>
> 2. It is not a load-test setting; both client and server are synchronous,
> single-threaded. No compression is used.
>
> "Depending on the runtime implementation, there could be an advantage just
> due to pipelining: it's possible that your handler thread is handling
> unmarshalling of a message in parallel with a framework thread handling I/O
> and decoding the wire protocol. Whereas with a unary call, it's all handled
> sequentially."
>
> This sounds very interesting to me, I would like to see where it actually
> does this pipelining, do you have any reference code? Thx Josh!
>

There's no explicit pipelining. It's the fact that your handler code is
started as soon as the headers are received. And when it asks for the next
message in the stream, it may handle unmarshaling. I don't think Java does
it this way, but Go does. With the generated Go stream clients, your
handler goroutine receiving a message is where the actual protobuf
unmarshaling happens, which can run concurrently with the gRPC framework
goroutines, which may be handling decoding of subsequent frames in the
HTTP/2 stream. But with a unary RPC, the request unmarshaling cannot begin
until the last byte of the request is received.


>
> On Thursday, 21 May 2020 17:42:27 UTC-7, Josh Humphries wrote:
>>
>> Is the server doing anything? One reason why streaming typically would
>> outperform unary is because you can begin processing as soon as you receive
>> the first chunk. Whereas with a unary RPC, your handler cannot be called
>> until the entire request has been received and unmarshalled.
>>
>> If this is a load test, where you are sending a significant load at the
>> server and measuring the difference, then the memory access pattern of
>> streaming may be friendlier to your allocator/garbage collector since you
>> are allocating smaller, shorter-lived chunks of memory. (And there is of
>> course the obvious advantage for memory use that you don't need to buffer
>> the entire 100mb when you do streaming.)
>>
>> If this is a no-op server, I would not expect there to be much difference
>> in performance -- in fact, streaming may have a slight disadvantage due to
>> the envelope and less efficient capability for compression (if you are
>> using compression). Depending on the runtime implementation, there could be
>> an advantage just due to pipelining: it's possible that your handler thread
>> is handling unmarshalling of a message in parallel with a framework thread
>> handling I/O and decoding the wire protocol. Whereas with a unary call,
>> it's all handled sequentially.
>>
>> 
>>
>> Josh Humphries
>>
>> FullStory <https://www.fullstory.com/>  |  Atlanta, GA
>>
>> Software Engineer
>>
>> j...@fullstory.com
>>
>>
>> On Thu, May 21, 2020 at 6:17 PM  wrote:
>>
>>> Hey All,
>>>
>>> I have been testing and benchmarking my application with gRPC, I'm using
>>> gRPC C++. I have noticed a performance difference with following cases:
>>>
>>> 1. sending large size payload (100 MB+) with a single unary rpc
>>> 2. breaking the payload into pieces of 1 MB and sending them as messages
>>> using client streaming rpc.
>>>
>>> For both cases, server side will process the data after receiving all of
>>> them and then send a response. I have found that 2 has smaller latency than
>>> 1.
>>>
>>> I don't quite understand in this case why breaking up larger message
>>> into smaller pieces out performs the unary call. Wondering if anyone has
>>> any insight into this.
>>>
>>> I have searched online and found a related github issue regarding
>>> optimal message size for streaming large payload:
>>> https://github.com/grpc/grpc.github.io/issues/371
>>>
>>> Would like to hear any ideas or suggestions.
>>>
>>> Thx.
>>>
>>> Best,
>>> Kevin
>>>

Re: [grpc-io] Sending a single huge message in one unary call vs sending chunks of messages with streaming

2020-05-21 Thread Josh Humphries
Is the server doing anything? One reason why streaming typically would
outperform unary is because you can begin processing as soon as you receive
the first chunk. Whereas with a unary RPC, your handler cannot be called
until the entire request has been received and unmarshalled.

If this is a load test, where you are sending a significant load at the
server and measuring the difference, then the memory access pattern of
streaming may be friendlier to your allocator/garbage collector since you
are allocating smaller, shorter-lived chunks of memory. (And there is of
course the obvious advantage for memory use that you don't need to buffer
the entire 100mb when you do streaming.)

If this is a no-op server, I would not expect there to be much difference
in performance -- in fact, streaming may have a slight disadvantage due to
the envelope and less efficient capability for compression (if you are
using compression). Depending on the runtime implementation, there could be
an advantage just due to pipelining: it's possible that your handler thread
is handling unmarshalling of a message in parallel with a framework thread
handling I/O and decoding the wire protocol. Whereas with a unary call,
it's all handled sequentially.



Josh Humphries

FullStory <https://www.fullstory.com/>  |  Atlanta, GA

Software Engineer

j...@fullstory.com


On Thu, May 21, 2020 at 6:17 PM  wrote:

> Hey All,
>
> I have been testing and benchmarking my application with gRPC, I'm using
> gRPC C++. I have noticed a performance difference with following cases:
>
> 1. sending large size payload (100 MB+) with a single unary rpc
> 2. breaking the payload into pieces of 1 MB and sending them as messages
> using client streaming rpc.
>
> For both cases, server side will process the data after receiving all of
> them and then send a response. I have found that 2 has smaller latency than
> 1.
>
> I don't quite understand in this case why breaking up larger message into
> smaller pieces out performs the unary call. Wondering if anyone has any
> insight into this.
>
> I have searched online and found a related github issue regarding optimal
> message size for streaming large payload:
> https://github.com/grpc/grpc.github.io/issues/371
>
> Would like to hear any ideas or suggestions.
>
> Thx.
>
> Best,
> Kevin
>


Re: [grpc-io] client-side retries and idempotency

2016-08-03 Thread 'Josh Humphries' via grpc.io
A custom method option in the proto file is the way we're doing this at
Square. We also use other custom method options to influence other kinds of
retry policies not implemented in that pull request: latency-triggered
retries, where another attempt is made before the first call completes if
it's taking too long.
