Re: [grpc-io] Error with pb2 file (with python 3)

2017-03-01 Thread ericYoon
Hi, sorry for the late reply.

The IDE I am using is Eclipse 4.6.2 with PyDev (PyDev for Eclipse 
5.4.0.201611281236).

When I run Python 3.5 directly and try the import, the same error occurs:

eric@diot:~/src/SPT_DIOT/snooopy/src$ PYTHONPATH=. python3
Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from ccodeServer.grpcComm import ccodeMan_pb2
>>> from ccodeServer.grpcComm import ccodeMan_pb2_grpc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/eric/src/SPT_DIOT/snooopy/src/ccodeServer/grpcComm/ccodeMan_pb2_grpc.py", line 6, in <module>
    import ccodeMan_pb2 as ccodeMan__pb2
ImportError: No module named 'ccodeMan_pb2'

After fixing the import manually:
Auto-generated: import ccodeMan_pb2 as ccodeMan__pb2
Manually fixed: from ccodeServer.grpcComm import ccodeMan_pb2 as ccodeMan__pb2

>>> from ccodeServer.grpcComm import ccodeMan_pb2_grpc
>>>

No more errors occurred.

This might be a path issue on my side, but IMHO the problem is that Python 3 no 
longer accepts implicit relative imports (in Python 2, the relative import was 
accepted, so there was no such issue).
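A cleaner workaround than editing the generated file (for anyone else hitting 
this) may be to run protoc from the source root with the full package-relative 
path to the .proto, so the generated import is package-qualified. This is only a 
sketch of that idea: it assumes the proto file lives at 
ccodeServer/grpcComm/ccodeMan.proto (adjust to your layout), and the exact alias 
protoc generates can differ between grpcio-tools versions.

# regen_protos.py -- hypothetical helper, run from the source root (src/)
from grpc_tools import protoc

protoc.main([
    "grpc_tools.protoc",                    # argv[0] placeholder
    "-I.",                                  # proto_path = source root
    "--python_out=.",
    "--grpc_python_out=.",
    "ccodeServer/grpcComm/ccodeMan.proto",  # assumed location of the .proto
])

Because the .proto is compiled under its package-relative path, the generated 
ccodeMan_pb2_grpc.py should then import ccodeMan_pb2 through the 
ccodeServer.grpcComm package rather than with a bare (implicitly relative) import.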

Regards,



Re: [grpc-io] Re: gRPC A6: Retries

2017-03-01 Thread 'Noah Eisen' via grpc.io
The only use case we can think of so far would be an alternative solution
to this routing affinity and hedging interaction. We initially discussed
putting the previously tried addresses in the metadata of an RPC, and then
the actual load balancing service would have access to it. But as
mentioned, this was written off because of the extra overhead.
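For anyone following the A6 doc who wants a concrete picture of the client-facing 
side, here is a rough sketch of what a per-method hedging entry in the service 
config might look like once this ships. It is illustration only: the service name 
is made up, the policy field names come from the current draft and may still 
change, and retry/hedging support is not implemented yet (the 
"grpc.service_config" channel argument shown is an existing C-core channel arg).

import json
import grpc

# Sketch only: hedgingPolicy fields follow the current A6 draft.
service_config = {
    "methodConfig": [{
        "name": [{"service": "my.package.MyService"}],  # hypothetical service
        "hedgingPolicy": {
            "maxAttempts": 3,
            "hedgingDelay": "0.1s",
            "nonFatalStatusCodes": ["UNAVAILABLE"],
        },
    }]
}

channel = grpc.insecure_channel(
    "localhost:50051",
    options=[("grpc.service_config", json.dumps(service_config))],
)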

On Wed, Mar 1, 2017 at 3:54 PM, 'Michael Rose' via grpc.io <
grpc-io@googlegroups.com> wrote:

> > To address your comments, we will be making a small change to the load
> balancing policy with respect to hedging RPCs. The change will support
> passing the local lb_policy a list of previously used addresses. The list
> will essentially be, "if possible, don't choose one of these addresses."
> For most cases this will solve your concern about the relation between
> affinity routing and hedging.
>
> It does! Thank you for your consideration, I definitely look forward to
> testing it out.
>
> > These changes will only occur in the local lb_policy. We do not want to
> send any extra data over the wire due to performance concerns.
>
> Seems reasonable to me. Out of curiosity, are there any use cases for
> doing so (other than perhaps server-aided hedge canceling)?
>
> *Michael Rose*
> Team Lead, Identity Resolution
> *Full*Contact | fullcontact.com
> 
> m: +1.720.837.1357 | t: @xorlev
>
> We’re hiring awesome people!
> See our open positions
> 
>
> On Wed, Mar 1, 2017 at 4:51 PM, Noah Eisen  wrote:
>
>> Hi Michael,
>>
>> To address your comments, we will be making a small change to the load
>> balancing policy with respect to hedging RPCs. The change will support
>> passing the local lb_policy a list of previously used addresses. The list
>> will essentially be, "if possible, don't choose one of these addresses."
>> For most cases this will solve your concern about the relation between
>> affinity routing and hedging.
>>
>> These changes will only occur in the local lb_policy. We do not want to
>> send any extra data over the wire due to performance concerns.
>>
>> gRPC support for affinity routing is ongoing, but this change to the
>> existing policy will make it easier to have hedging and affinity routing
>> work together in the future.
>>
>> On Sun, Feb 12, 2017 at 7:26 PM,  wrote:
>>
>>> > We are not supporting explicit load balancing constraints for retries.
>>> The retry attempt or hedged RPC will be re-resolved through the
>>> load-balancer, so it's up to the service owner to ensure that this has a
>>> low-likelihood of issuing the request to the same backend.
>>>
>>> That seems fairly difficult for any service with request-dependent
>>> routing semantics. Let's use a DFS as an example: many DFSes maintain N
>>> replicas of a given file block. In the case where you send a hedged request
>>> for a block, your likelihood is 1/N of requerying the same DFS node which
>>> might well have a slow disk. At least for us using HDFS, N=3 most of the
>>> time; therefore a 33% chance of requerying the same node. Even assuming a
>>> smart load balancing service which intelligently removes poorly performing
>>> storage nodes from service, it still seems desirable to ensure hedged
>>> requests go to a different node. Not having a story for more informed load
>>> balancing seems like it makes a lot of use cases more difficult than they
>>> need to be.
>>>
>>> Regards,
>>> Michael
>>>
>>> On Sunday, February 12, 2017 at 7:24:59 PM UTC-7, Eric Gribkoff wrote:

 Hi Michael,

 Thanks for the feedback. Responses to your questions (and Josh's
 follow-up question on retry backoff times) are inline below.

 On Sat, Feb 11, 2017 at 1:57 PM, 'Michael Rose' via grpc.io <
 grp...@googlegroups.com> wrote:

> A few questions:
>
> 1) Under this design, is it possible to add load balancing
> constraints for retried/hedged requests? Especially during hedging, I'd
> like to be able to try a different server since the original server might
> be garbage collecting or have otherwise collected a queue of requests such
> that a retry/hedge to this server will not be very useful. Or, perhaps the
> key I'm looking up lives on a specific subset of storage servers and
> therefore should be balanced to that specific subset. While that's the
> domain of a LB policy, what information will hedging/retries provide to 
> the
> LB policy?
>
>
 We are not supporting explicit load balancing constraints for retries.
 The retry attempt or hedged RPC will be re-resolved through the
 load-balancer, so it's up 

Re: [grpc-io] Re: gRPC A6: Retries

2017-03-01 Thread 'Michael Rose' via grpc.io
> To address your comments, we will be making a small change to the load
balancing policy with respect to hedging RPCs. The change will support
passing the local lb_policy a list of previously used addresses. The list
will essentially be, "if possible, don't choose one of these addresses."
For most cases this will solve your concern about the relation between
affinity routing and hedging.

It does! Thank you for your consideration, I definitely look forward to
testing it out.

> These changes will only occur in the local lb_policy. We do not want to
send any extra data over the wire due to performance concerns.

Seems reasonable to me. Out of curiosity, are there any use cases for doing
so (other than perhaps server-aided hedge canceling)?

*Michael Rose*
Team Lead, Identity Resolution
*Full*Contact | fullcontact.com

m: +1.720.837.1357 | t: @xorlev

We’re hiring awesome people!
See our open positions


On Wed, Mar 1, 2017 at 4:51 PM, Noah Eisen  wrote:

> Hi Michael,
>
> To address your comments, we will be making a small change to the load
> balancing policy with respect to hedging RPCs. The change will support
> passing the local lb_policy a list of previously used addresses. The list
> will essentially be, "if possible, don't choose one of these addresses."
> For most cases this will solve your concern about the relation between
> affinity routing and hedging.
>
> These changes will only occur in the local lb_policy. We do not want to
> send any extra data over the wire due to performance concerns.
>
> gRPC support for affinity routing is ongoing, but this change to the
> existing policy will make it easier to have hedging and affinity routing
> work together in the future.
>
> On Sun, Feb 12, 2017 at 7:26 PM,  wrote:
>
>> > We are not supporting explicit load balancing constraints for retries.
>> The retry attempt or hedged RPC will be re-resolved through the
>> load-balancer, so it's up to the service owner to ensure that this has a
>> low-likelihood of issuing the request to the same backend.
>>
>> That seems fairly difficult for any service with request-dependent
>> routing semantics. Let's use a DFS as an example: many DFSes maintain N
>> replicas of a given file block. In the case where you send a hedged request
>> for a block, your likelihood is 1/N of requerying the same DFS node which
>> might well have a slow disk. At least for us using HDFS, N=3 most of the
>> time; therefore a 33% chance of requerying the same node. Even assuming a
>> smart load balancing service which intelligently removes poorly performing
>> storage nodes from service, it still seems desirable to ensure hedged
>> requests go to a different node. Not having a story for more informed load
>> balancing seems like it makes a lot of use cases more difficult than they
>> need to be.
>>
>> Regards,
>> Michael
>>
>> On Sunday, February 12, 2017 at 7:24:59 PM UTC-7, Eric Gribkoff wrote:
>>>
>>> Hi Michael,
>>>
>>> Thanks for the feedback. Responses to your questions (and Josh's
>>> follow-up question on retry backoff times) are inline below.
>>>
>>> On Sat, Feb 11, 2017 at 1:57 PM, 'Michael Rose' via grpc.io <
>>> grp...@googlegroups.com> wrote:
>>>
 A few questions:

 1) Under this design, is it possible to add load balancing
 constraints for retried/hedged requests? Especially during hedging, I'd
 like to be able to try a different server since the original server might
 be garbage collecting or have otherwise collected a queue of requests such
 that a retry/hedge to this server will not be very useful. Or, perhaps the
 key I'm looking up lives on a specific subset of storage servers and
 therefore should be balanced to that specific subset. While that's the
 domain of a LB policy, what information will hedging/retries provide to the
 LB policy?


>>> We are not supporting explicit load balancing constraints for retries.
>>> The retry attempt or hedged RPC will be re-resolved through the
>>> load-balancer, so it's up to the service owner to ensure that this has a
>>> low-likelihood of issuing the request to the same backend. This is part of
>>> a decision to keep the retry design as simple as possible while satisfying
>>> the majority of use cases. If your load-balancing policy has a high
>>> likelihood of sending requests to the same server each time, hedging (and
>>> to some extent retries) will be less useful regardless. There will be
>>> metadata attached to the call indicating that it's a retry, but it won't
>>> include information about which servers the previous 

Re: [grpc-io] gRPC A6: Retries

2017-03-01 Thread 'Eric Gribkoff' via grpc.io
I think the terminology here gets confusing between initial/trailing
metadata, gRPC rule names, and HTTP/2 frame types. Our retry design doc was
indeed underspecified in how it deals with initial metadata, and will
be updated. I go over all of the considerations in detail below.

For clarity, I will use all caps for the names of HTTP/2 frame types, e.g.,
HEADERS frame, and use the capitalized gRPC rule names from the
specification.

The gRPC specification ensures that a status (containing a gRPC status
code) is only sent in Trailers, which is contained in an HTTP/2 HEADERS
frame. The only way that the gRPC status code can be contained in the first
HTTP/2 frame received is if the server sends a Trailers-Only response.

Otherwise, the gRPC spec mandates that the first frame sent be the
Response-Headers (again, sent in an HTTP/2 HEADERS frame). Response-Headers
includes (optional) Custom-Metadata, which is usually what we are talking
about when we say "initial metadata".

Regardless of whether the Response-Headers includes anything in its
Custom-Metadata, if the gRPC client library notifies the client application
layer of what metadata is (or is not) included, we now have to view the RPC
as committed, aka no longer retryable. This is the only option, as a later
retry attempt could receive different Custom-Metadata, contradicting what
we've already told the client application layer.

We cannot include gRPC status codes in the Response-Headers along with
"initial metadata". It's perfectly valid according to the spec for a server
to send metadata along a stream in its Response-Headers, wait for one hour,
then (without having sent any messages), close the stream with a retryable
error.

However, the proposal that a server include the gRPC status code (if known)
in the initial response is still sound. Concretely, this means: if a gRPC
server has not yet sent Response-Headers and receives an error response, it
should send a Trailers-Only response containing the gRPC status code. This
would allow retry attempts on the client-side to proceed, if applicable.
This is going to be superior to sending Response-Headers immediately
followed by Trailers, which would cause the RPC to become committed on the
client side (if the Response-Header metadata is made available to the
client application layer) and stop retry attempts.

We still can encounter the case where a server intentionally sends
Response-Headers to open a stream, then eventually closes the stream with
an error without ever sending any messages. Such cases would not be
retryable, but I think it's fair to argue that if the server *has* to send
metadata in advance of sending any responses, that metadata is actually a
response, and should be treated as such (i.e., their metadata just ensured
the RPC will be committed on the client-side).

Rather than either explicitly disallowing such behavior by modifying some
specification (this behavior is currently entirely unspecified, so while
specification is worthwhile, it should be separate from the retry policy
design currently under discussion), we can just change the default server
behavior of C++, and Go if necessary, to match Java. In Java servers, the
Response-Headers are delayed until some response message is sent. If the
server application returns an error status before sending a message, then
Trailers-Only is sent instead of Response-Headers.
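To make that server-side case concrete, here is a minimal sketch in Python (purely 
for illustration; the servicer and method names are made up, and it uses today's 
Python API rather than describing the Java or C++ internals): a handler that fails 
before producing any response message, which is the case that should surface as 
Trailers-Only and therefore remain retryable on the client.

import grpc

class FlakyServicer:  # hypothetical servicer
    """Fails before sending any response message or explicit initial metadata."""

    def GetThing(self, request, context):
        # abort() closes the RPC with the given status before any message has
        # been sent. Per the discussion above, a client library may still treat
        # such an RPC as uncommitted (and thus retryable), since no
        # Response-Headers content was surfaced to the application layer.
        context.abort(grpc.StatusCode.UNAVAILABLE, "temporarily overloaded")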

We can also leave it up to the gRPC client library implementation to decide
when an RPC is committed based on received Response-Headers. If and while
the client library can guarantee that the presence (or absence) of initial
metadata is not visible to the client application layer, the RPC can be
considered uncommitted. This is an implementation detail that should very
rarely be necessary if the above change is made to default server behavior,
but it would not violate anything in the retry spec or semantics.

Eric

On Wed, Mar 1, 2017 at 11:32 AM, 'Eric Anderson' via grpc.io <
grpc-io@googlegroups.com> wrote:

> On Wed, Mar 1, 2017 at 10:51 AM, 'Mark D. Roth' via grpc.io <
> grpc-io@googlegroups.com> wrote:
>
>> On Wed, Mar 1, 2017 at 10:20 AM, 'Eric Anderson' via grpc.io <
>> grpc-io@googlegroups.com> wrote:
>>
>>> What? That does not seem to be a proper understanding of the text, or
>>> the text is wrongly worded. Why would the RPC be "committed as soon as it
>>> receives the initial metadata"? That isn't in the text... In your example
>>> it seems it would be committed at "the trailing metadata that includes a
>>> status" as long as that status was OK, as per the "an explicit OK status"
>>> in the text.
>>>
>>
>> The language in the above quote is probably not as specific as it should
>> be, at least with respect to the wire protocol.  The intent here is that
>> the RPC should be considered committed when it receives either initial
>> metadata or a payload message.
>>

Re: [grpc-io] Re: (gRPC-java) Why are all services singletons?

2017-03-01 Thread Ryan Michela
Josh, this is exactly what I am talking about.

Each service implementation is a singleton *from the perspective of gRPC*. 
You cannot have more than one service implementation instance handle 
requests from callers. You cannot get a fresh instance for every request.

Is there a specific gRPC design reason that limits the number of service 
implementation instances per server to one?

On Wednesday, March 1, 2017 at 12:03:05 PM UTC-8, Josh Humphries wrote:
>
> I think this is referring to the fact that you bind a single server object 
> for the life of the GRPC server.
> Go: https://github.com/grpc/grpc-go/blob/master/server.go#L276
> Java: 
> https://github.com/grpc/grpc-java/blob/master/compiler/src/testLite/golden/TestService.java.txt#L167
>
> So it's not singleton in a traditional pattern sense -- e.g. global/static 
> singleton. But it is a singleton within the scope of a GRPC server.
>
> This question has come up before. I think, in the past, it has been asked 
> that URL prefixes could be used to route requests for the same service to 
> different instances. For example, POST to 
> "/service1/my.package.MyService/MyMethod" invokes myMethod on some server 
> instance A, and "/service2/my.package.MyService/MyMethod" invokes it for a 
> different instance.
>
> I think the justification in the past has been that this would complicate 
> the protocol as targeting specific implementations of the same service 
> suddenly requires new behavior in both clients and servers. Instead, the 
> recommended pattern is to use metadata (e.g. a header) and have an 
> aggregate implementation re-dispatch to another implementation based on 
> incoming metadata.
>
>
> 
> *Josh Humphries*
> jh...@bluegosling.com 
>
> On Wed, Mar 1, 2017 at 2:36 PM, 'Carl Mastrangelo' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>> Have you actually tried this?  Can you include an error showing that this 
>> is not possible?
>>
>> On Monday, February 27, 2017 at 4:49:42 PM UTC-8, Ryan Michela wrote:
>>>
>>> Each server can only reference one instance of a service implementation 
>>> for the lifetime of the service, and all requests to that service are 
>>> routed concurrently to that single, shared instance, correct? 
>>>
>>> On Monday, February 27, 2017 at 4:39:26 PM UTC-8, Carl Mastrangelo wrote:

 No?  I don't know where you could have got that impression but you can 
 make as many as you like, and share them between Servers as you please.

 On Monday, February 27, 2017 at 3:51:57 PM UTC-8, Ryan Michela wrote:
>
> I mean the instance of the class that implements my service 
> operations. The instance you pass to ServerBuilder.addService(). 
>
> Isn't that instance a singleton from the perspective of gRPC?
>
> On Monday, February 27, 2017 at 12:48:41 PM UTC-8, Carl Mastrangelo 
> wrote:
>>
>> What do you mean by Service?   There are hardly any places in our 
>> code where something is a singleton.  
>>
>> On Saturday, February 25, 2017 at 10:31:59 PM UTC-8, Ryan Michela 
>> wrote:
>>>
>>> I'd like to know the design rationale for why gRPC services 
>>> implementations are all concurrently executing singletons. There are 
>>> many 
>>> possible instancing and threading modes that could have been used.
>>>
>>>- Singleton instancing
>>>- Per-call instancing
>>>- Per-session instancing
>>>
>>>
>>>- Concurrent execution
>>>- Sequential execution
>>>
>>> Concurrent singletons make sense from an absolute throughput angle - 
>>> no object instantiation or blocking. But concurrent singletons are 
>>> hardest 
>>> for developers to work with - service implementors must be keenly aware 
>>> of 
>>> shared state and multi-threading concerns.
>>>
>>>1. Why was concurrent singleton chosen as the only 
>>>out-of-the-box way to implement gRPC (java) services? 
>>>2. Would API for supporting other threading and instancing modes 
>>>be accepted in a PR?
>>>


[grpc-io] Re: Retry policy

2017-03-01 Thread Ilina Mitra
The server is in Java and the client is in C++.

On Tue, Feb 28, 2017 at 8:49 PM, Stanley Cheung wrote:

> What language are your server and client implemented in respectively?
>
> On Friday, February 24, 2017 at 11:46:37 AM UTC-8,
> ili...@luminatewireless.com wrote:
>>
>> Hi there,
>>
>> We are in the process of upgrading gRPC from git commit ID
>> c7767bb1244204724c72422c11b8c6a146caef39 (March 2016) to git commit ID
>> 4fe0d977ed1d04cbee6b44d6f30e56e5133287b5 (February 2017).
>>
>> One new behaviour we have observed is, if the client fails to connect to
>> the server, it seems to take longer for the retries to happen. Has the
>> retry policy changed? Previously, once the server was available, the client
>> channel would connect within one or two seconds. Now, it seems to take
>> closer to 10 seconds once the server is available.
>>
>> Ilina
>>
>
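For what it's worth, one thing we still plan to check on our side is whether the 
channel's connection backoff settings explain the slower reconnects described 
above; they can be capped via channel arguments. A Python sketch for illustration 
only (our client is C++, where the same channel arguments are set via 
ChannelArguments; availability of individual arguments depends on the gRPC 
version, and the target address below is an example):

import grpc

# Sketch: bound how long the channel waits between reconnect attempts.
options = [
    ("grpc.min_reconnect_backoff_ms", 1000),  # lower bound between attempts
    ("grpc.max_reconnect_backoff_ms", 2000),  # upper bound between attempts
]
channel = grpc.insecure_channel("server-host:50051", options=options)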



Re: [grpc-io] Re: (gRPC-java) Why are all services singletons?

2017-03-01 Thread Josh Humphries
I think this is referring to the fact that you bind a single server object
for the life of the GRPC server.
Go: https://github.com/grpc/grpc-go/blob/master/server.go#L276
Java: https://github.com/grpc/grpc-java/blob/master/compiler/src/testLite/golden/TestService.java.txt#L167

So it's not a singleton in the traditional pattern sense -- e.g. a global/static
singleton. But it is a singleton within the scope of a gRPC server.

This question has come up before. I think, in the past, it has been suggested
that URL prefixes could be used to route requests for the same service to
different instances. For example, POST to
"/service1/my.package.MyService/MyMethod"
invokes myMethod on some server instance A, and
"/service2/my.package.MyService/MyMethod"
invokes it for a different instance.

I think the justification in the past has been that this would complicate
the protocol as targeting specific implementations of the same service
suddenly requires new behavior in both clients and servers. Instead, the
recommended pattern is to use metadata (e.g. a header) and have an
aggregate implementation re-dispatch to another implementation based on
incoming metadata.
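Roughly, that re-dispatch pattern looks like the sketch below (Python only for 
brevity; the header name, method, and delegate wiring are all made up). A single 
registered servicer fans requests out to per-instance delegates based on an 
incoming metadata header.

import grpc

class DispatchingServicer:  # the one instance registered with the server
    def __init__(self, delegates):
        # e.g. {"service1": impl_a, "service2": impl_b}
        self._delegates = delegates

    def MyMethod(self, request, context):
        md = dict(context.invocation_metadata())
        target = md.get("x-target-instance", "service1")  # hypothetical header
        delegate = self._delegates.get(target)
        if delegate is None:
            context.abort(grpc.StatusCode.NOT_FOUND, "unknown target instance")
        return delegate.MyMethod(request, context)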



*Josh Humphries*
jh...@bluegosling.com

On Wed, Mar 1, 2017 at 2:36 PM, 'Carl Mastrangelo' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Have you actually tried this?  Can you include an error showing that this
> is not possible?
>
> On Monday, February 27, 2017 at 4:49:42 PM UTC-8, Ryan Michela wrote:
>>
>> Each server can only reference one instance of a service implementation
>> for the lifetime of the service, and all requests to that service are
>> routed concurrently to that single, shared instance, correct?
>>
>> On Monday, February 27, 2017 at 4:39:26 PM UTC-8, Carl Mastrangelo wrote:
>>>
>>> No?  I don't know where you could have got that impression but you can
>>> make as many as you like, and share them between Servers as you please.
>>>
>>> On Monday, February 27, 2017 at 3:51:57 PM UTC-8, Ryan Michela wrote:

 I mean the instance of the class that implements my service operations.
 The instance you pass to ServerBuilder.addService().

 Isn't that instance a singleton from the perspective of gRPC?

 On Monday, February 27, 2017 at 12:48:41 PM UTC-8, Carl Mastrangelo
 wrote:
>
> What do you mean by Service?   There are hardly any places in our code
> where something is a singleton.
>
> On Saturday, February 25, 2017 at 10:31:59 PM UTC-8, Ryan Michela
> wrote:
>>
>> I'd like to know the design rationale for why gRPC services
>> implementations are all concurrently executing singletons. There are many
>> possible instancing and threading modes that could have been used.
>>
>>- Singleton instancing
>>- Per-call instancing
>>- Per-session instancing
>>
>>
>>- Concurrent execution
>>- Sequential execution
>>
>> Concurrent singletons make sense from an absolute throughput angle -
>> no object instantiation or blocking. But concurrent singletons are 
>> hardest
>> for developers to work with - service implementors must be keenly aware 
>> of
>> shared state and multi-threading concerns.
>>
>>1. Why was concurrent singleton chosen as the only out-of-the-box
>>way to implement gRPC (java) services?
>>2. Would API for supporting other threading and instancing modes
>>be accepted in a PR?
>>



Re: [grpc-io] Re: Grpc Server not working: Exception: "Received RST_STREAM with error code 8".

2017-03-01 Thread matthias.weiser via grpc.io
I think the issue was on my end.
I did not realize that I needed to propagate the assemblyBinding changes to 
the .exe.config.



Re: [grpc-io] Re: Grpc Server not working: Exception: "Received RST_STREAM with error code 8".

2017-03-01 Thread matthias.weiser via grpc.io
No special processing on the server side.
It seems I can fix the issue if I bind both localhost and the machine IP 
when starting the gRPC server. 
In 1.0.0 it was sufficient to use only the machine IP.
I will need to do some further testing to confirm this.
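In gRPC-Python terms (only an illustration of the same workaround; our server is 
actually C#, and the port and addresses below are examples), binding both 
addresses just means adding two listening ports to the one server:

import time
from concurrent import futures
import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
server.add_insecure_port("127.0.0.1:50051")    # loopback
server.add_insecure_port("192.0.2.10:50051")   # machine IP (example address)
server.start()
while True:
    time.sleep(3600)

Binding "0.0.0.0:50051" instead would listen on all interfaces, which may be a 
simpler way to get the same effect, but I still need to verify which of these 
actually avoids the RST_STREAM error.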
