[grpc-io] bazel build analysis failure

2018-09-18 Thread Rui Shi
Hi,

I was trying to build gRPC using Bazel, but it failed at the analysis stage 
without any useful information:

bazel build :all
INFO: Build options have changed, discarding analysis cache.
ERROR: build interrupted
INFO: Elapsed time: 56.195s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (39 packages loaded)

Any idea what the problem is?

Thanks

Rui





[grpc-io] Re: [grpc-java] InvalidProtocolBufferException: Protocol message contained an invalid tag (zero) error

2018-09-18 Thread Anthony Corbacho
Yes, both are written in Java and I am using plaintext.

On Tuesday, September 18, 2018 at 7:58:16 PM UTC-4, Carl Mastrangelo wrote:
>
> Are both your client and server written using Java?   Also, are you using 
> TLS or plaintext?   
>
> On Monday, September 17, 2018 at 8:21:30 PM UTC-7, Anthony Corbacho wrote:
>>
>> Hi,
>>
>> This is strange: I have enabled the same logging, but I only see RST_STREAM.
>> Do I need to do something else?
>>
>> # GRPC debugging
>> log4j.logger.io.grpc.netty.NettyServerHandler=ALL
>> log4j.logger.io.grpc.netty.NettyClientHandler =ALL
>>
>>
>>
>> On Monday, September 17, 2018 at 4:52:02 PM UTC-4, Carl Mastrangelo wrote:
>>>
>>> Here's what I use to turn it on: 
>>> https://gist.github.com/carl-mastrangelo/49f6d6a8ff29200fcb7d9e25e473b2d0
>>>
>>> On Monday, September 17, 2018 at 11:39:47 AM UTC-7, Anthony Corbacho 
>>> wrote:

 Hi Carl,

 Thanks for the fast answer.
 How can I enable `netty debug log frame that's for DATA`?

 thanks~.

 On Monday, September 17, 2018 at 1:38:20 PM UTC-4, Carl Mastrangelo 
 wrote:
>
> You should look for a netty debug log frame that's for DATA, not 
> RST_STREAM.  That should show you the corrupted message.
>
> There are also some hooks into the core gRPC library that (while more 
> complicated) will let you examine the message bytes.  By using a custom 
> Marshaller, you can peek at the bytes and then delegate the remaining 
> message to the protobuf Marshaller.  You can see how to wire up a 
> Marshaller by looking in the generated code for the MethodDescriptor 
> [a rough sketch follows at the end of this message].
>
> On Sunday, September 16, 2018 at 2:38:26 PM UTC-7, Anthony Corbacho 
> wrote:
>>
>> Hello,
>> I am new to gRPC and so far I like it very much.
>>
>> I am using a bidirectional stream and from time to time I get an 
>> exception like this one:
>>
>> io.grpc.StatusRuntimeException: CANCELLED: Failed to read message.
>>   at io.grpc.Status.asRuntimeException(Status.java:526)
>>   at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:418)
>>   at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
>>   at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:663)
>>   at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
>>   at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:392)
>>   at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:443)
>>   at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
>>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:525)
>>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:446)
>>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:510)
>>   at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>>   at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>   at java.lang.Thread.run(Thread.java:748)
>> Caused by: io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
>>   at io.grpc.Status.asRuntimeException(Status.java:517)
>>   at io.grpc.protobuf.lite.ProtoLiteUtils$2.parse(ProtoLiteUtils.java:168)
>>   at io.grpc.protobuf.lite.ProtoLiteUtils$2.parse(ProtoLiteUtils.java:82)
>>   at io.grpc.MethodDescriptor.parseResponse(MethodDescriptor.java:265)
>>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:498)
>>   ... 5 more
>> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
>>   at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:105)
>>   at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:646)
>>   at com.zepl.notebook.service.grpc.NotebookResponse.<init>(NotebookResponse.java:46)
>>   at com.zepl.notebook.service.grpc.NotebookResponse.<init>(NotebookResponse.java:13)
>>   at com.zepl.notebook.service.grpc.NotebookResponse$1.parsePartialFrom(NotebookResponse.java:2851)
>>   at com.zepl.notebook.service.grpc.NotebookResponse$1.parsePartialFrom(NotebookResponse.java:2846)
>>   at 
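
A minimal sketch of the custom-Marshaller idea Carl describes above (not from the thread): it wraps the protobuf marshaller that the generated code would normally use, buffers the incoming bytes so they can be logged, and then delegates parsing. NotebookResponse is the poster's generated message type; everything else here is illustrative.

import com.google.protobuf.ByteString;
import io.grpc.MethodDescriptor;
import io.grpc.protobuf.ProtoUtils;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Base64;

// Wraps the generated protobuf marshaller so the raw wire bytes of each
// response can be inspected before protobuf parsing (and before the
// "invalid tag (zero)" exception would be thrown).
final class InspectingMarshaller implements MethodDescriptor.Marshaller<NotebookResponse> {
  private final MethodDescriptor.Marshaller<NotebookResponse> delegate =
      ProtoUtils.marshaller(NotebookResponse.getDefaultInstance());

  @Override
  public InputStream stream(NotebookResponse value) {
    return delegate.stream(value);            // serialization is unchanged
  }

  @Override
  public NotebookResponse parse(InputStream stream) {
    try {
      byte[] raw = ByteString.readFrom(stream).toByteArray();   // buffer the message
      System.err.println("response bytes (base64): "
          + Base64.getEncoder().encodeToString(raw));           // peek at the bytes
      return delegate.parse(new ByteArrayInputStream(raw));     // delegate to protobuf
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}

The wrapped marshaller can then be plugged into a copy of the generated MethodDescriptor (for example via its toBuilder()) when building the call, so only the response path of this one method is affected.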

[grpc-io] Re: gRPC building server and client

2018-09-18 Thread 'Carl Mastrangelo' via grpc.io
The boss event loop group determines which threads call accept() (in 
low-level networking terms), while the worker group does the reads and 
writes. The boss loop can be the same as the worker loop; this is the 
default.

The executor runs the ClientCall.Listener and ServerCall.Listener 
callbacks. It is more important to set the executor; the event loop groups 
are more for advanced users and require familiarity with Netty.
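
As a quick sketch of where each piece is plugged in (the thread counts and port are arbitrary examples, not recommendations):

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import java.io.IOException;
import java.util.concurrent.Executors;

public final class ServerSetup {
  public static void main(String[] args) throws IOException, InterruptedException {
    Server server = NettyServerBuilder.forPort(8080)
        // boss group: accepts incoming connections
        .bossEventLoopGroup(new NioEventLoopGroup(1))
        // worker group: performs the socket reads and writes
        .workerEventLoopGroup(new NioEventLoopGroup(4))
        .channelType(NioServerSocketChannel.class)
        // executor: runs the ServerCall.Listener callbacks, i.e. your service code
        .executor(Executors.newFixedThreadPool(16))
        // .addService(yourGeneratedServiceImpl)
        .build()
        .start();
    server.awaitTermination();
  }
}

On the client side, NettyChannelBuilder has the analogous eventLoopGroup(...) and executor(...) methods; there is no separate boss group because a client does not accept connections.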

Hope this helps!

On Tuesday, September 18, 2018 at 3:58:42 AM UTC-7, qplc wrote:
>
> Hi,
>
> NettyServerBuilder, while constructing a gRPC server:
> What is the primary difference between the boss and worker event loop 
> groups? What is the role of the executor and of the boss and worker event 
> loop groups when constructing a NettyServerBuilder?
>
> NettyChannelBuilder, while constructing a gRPC client:
> What is the role of the executor and the event loop group? How are they 
> different from each other?
>
> Does the executor play the same role when building a gRPC server as when 
> building a client?
>
> Can someone clarify the above?
>
> Regards,
> qplc
>
>



[grpc-io] Re: [grpc-java] InvalidProtocolBufferException: Protocol message contained an invalid tag (zero) error

2018-09-18 Thread 'Carl Mastrangelo' via grpc.io
Are both your client and server written using Java?   Also, are you using 
TLS or plaintext?   

On Monday, September 17, 2018 at 8:21:30 PM UTC-7, Anthony Corbacho wrote:
>
> Hi,
>
> This is strange: I have enabled the same logging, but I only see RST_STREAM.
> Do I need to do something else?
>
> # GRPC debugging
> log4j.logger.io.grpc.netty.NettyServerHandler=ALL
> log4j.logger.io.grpc.netty.NettyClientHandler =ALL
>
>
>
> On Monday, September 17, 2018 at 4:52:02 PM UTC-4, Carl Mastrangelo wrote:
>>
>> Here's what I use to turn it on: 
>> https://gist.github.com/carl-mastrangelo/49f6d6a8ff29200fcb7d9e25e473b2d0 
>> [a java.util.logging sketch follows at the end of this message]
>>
>> On Monday, September 17, 2018 at 11:39:47 AM UTC-7, Anthony Corbacho 
>> wrote:
>>>
>>> Hi Carl,
>>>
>>> Thanks for the fast answer.
>>> How can I enable `netty debug log frame that's for DATA`?
>>>
>>> thanks~.
>>>
>>> On Monday, September 17, 2018 at 1:38:20 PM UTC-4, Carl Mastrangelo 
>>> wrote:

 You should look for a netty debug log frame that's for DATA, not 
 RST_STREAM.  That should show you the corrupted message.

 There are also some hooks into the core gRPC library that (while more 
 complicated) will let you examine the message bytes.  By using a custom 
 Marshaller, you can peek at the bytes and then delegate the remaining 
 message to the protobuf Marshaller.  You can see how to wire up a 
 Marshaller by looking in the generated code for the MethodDescriptor.

 On Sunday, September 16, 2018 at 2:38:26 PM UTC-7, Anthony Corbacho 
 wrote:
>
> Hello,
> I am new to gRPC and so far I like it very much.
>
> I am using a bidirectional stream and from time to time I get an 
> exception like this one:
>
> io.grpc.StatusRuntimeException: CANCELLED: Failed to read message.
>   at io.grpc.Status.asRuntimeException(Status.java:526)
>   at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:418)
>   at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
>   at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:663)
>   at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
>   at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:392)
>   at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:443)
>   at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:525)
>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:446)
>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:510)
>   at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>   at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
>   at io.grpc.Status.asRuntimeException(Status.java:517)
>   at io.grpc.protobuf.lite.ProtoLiteUtils$2.parse(ProtoLiteUtils.java:168)
>   at io.grpc.protobuf.lite.ProtoLiteUtils$2.parse(ProtoLiteUtils.java:82)
>   at io.grpc.MethodDescriptor.parseResponse(MethodDescriptor.java:265)
>   at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:498)
>   ... 5 more
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
>   at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:105)
>   at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:646)
>   at com.zepl.notebook.service.grpc.NotebookResponse.<init>(NotebookResponse.java:46)
>   at com.zepl.notebook.service.grpc.NotebookResponse.<init>(NotebookResponse.java:13)
>   at com.zepl.notebook.service.grpc.NotebookResponse$1.parsePartialFrom(NotebookResponse.java:2851)
>   at com.zepl.notebook.service.grpc.NotebookResponse$1.parsePartialFrom(NotebookResponse.java:2846)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:91)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at io.grpc.protobuf.lite.ProtoLiteUtils$2.parseFrom(ProtoLiteUtils.java:173)
>   at 
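
Regarding the frame-logging question above: a rough sketch (not Carl's gist) of one way to surface the frame logs when Netty routes them through java.util.logging, which is the common case when no SLF4J/log4j binding is picked up. The logger names are the ones from the log4j snippet above; if your build does bind Netty to log4j, the equivalent log4j configuration at DEBUG/TRACE level would be needed instead.

import java.util.ArrayList;
import java.util.List;
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Raises the gRPC Netty handlers to ALL so HTTP/2 frame logs
// (HEADERS, DATA, RST_STREAM, ...) show up on the console.
public final class GrpcFrameLogging {
  // Keep strong references: java.util.logging holds named loggers weakly,
  // so without this list the configuration could be garbage collected.
  private static final List<Logger> configured = new ArrayList<>();

  public static void enable() {
    ConsoleHandler handler = new ConsoleHandler();
    handler.setLevel(Level.ALL);
    for (String name : new String[] {
        "io.grpc.netty.NettyClientHandler",
        "io.grpc.netty.NettyServerHandler"}) {
      Logger logger = Logger.getLogger(name);
      logger.setLevel(Level.ALL);
      logger.addHandler(handler);
      configured.add(logger);
    }
  }

  private GrpcFrameLogging() {}
}

Call GrpcFrameLogging.enable() before creating the channel or server, then look for the DATA frame that precedes the RST_STREAM on the failing stream.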

[grpc-io] gRPC Community Meetings

2018-09-18 Thread 'April Kyle Nassi' via grpc.io
Hi all! We've been holding community meetings via Zoom on every other
Thursday at 11am PST for a while now, and I've been hearing from some in
the community that it's hard for them to attend at that time. I'd like to
get an idea of what days/times would work best for people. I've set up a
Doodle poll to get feedback. Please check it out and indicate when you'd
be available.

Mark your availability at https://doodle.com/poll/w6na98wd34b52na4
*NOTE: Dates are listed there, but ignore them. Focus on the day/time
combo, as we'd select the same time for each recurring meeting.*

Thanks for taking a look, and please reach out if you have any questions!
These community meetings are for you to connect with others working on
gRPC, get help, and share the cool stuff you're working on. If you'd like
to add an item to the agenda for a future meeting, please add it to the
working doc! http://bit.ly/grpcmeetings




*April Kyle Nassi, Program Manager*

Google, Inc. | Open Source Strategy | Developer Relations

345 Spear Street, San Francisco, CA 94105


ana...@google.com | @thisisnotapril 



Re: [grpc-io] Client Loadbalancing with Kubernetes SVC

2018-09-18 Thread julien . senon
Thanks a lot for your explanation. I will have a look at the two solutions 
provided.



On Tuesday, September 18, 2018 at 5:40:45 PM UTC+2, Josh Humphries wrote:
>
> Unless things have changed recently, the default kubedns result for a 
> standard k8s service will have a single IP: the virtual IP of the service. 
> This in turn causes the gRPC client to configure just a single socket 
> connection and route all requests on the one socket. Since kubeproxy load 
> balancing through that virtual IP is layer 4, this can result in very poor 
> load balancing, especially if you have unbalanced clients (e.g. a small 
> number of clients that constitute the majority of RPC traffic).
>
> If you use "headless" services in k8s on the other hand, the DNS query 
> should return multiple pod IPs. This causes the gRPC client to in turn 
> maintain multiple connections, one to each destination pod. However, I am 
> not sure how responsive this will be to topology changes (as pod instances 
> are auto-scaled, or killed and rescheduled, they will move around and 
> change IP address). It would require disabling DNS caching and making sure 
> the service host name is resolved/polled regularly.
>
> Another solution is to use a custom gRPC resolver that talks to the k8s 
> API to watch the service topology and convey the results to the gRPC 
> client. For Go, this is implemented in an open-source package: 
> github.com/sercand/kuberesolver
>
> (Most of my experience is with Go. So your mileage may vary if using a 
> runtime other than Go. But I think the various implementations largely 
> behave quite similarly.)
> 
> *Josh Humphries*
> jh...@bluegosling.com 
>
>
> On Tue, Sep 18, 2018 at 7:10 AM > wrote:
>
>> Hello,
>>
>> Does it make sense to have a client-side load balancer with gRPC when we 
>> are running the gRPC server in a Kubernetes cluster? 
>> The client will dial a service DNS name and will always retrieve the IP of 
>> the service, not the IPs of the Pods behind it.
>>
>> NB: I have already seen this page: 
>> https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
>>
>> Thanks
>>
>>
>



Re: [grpc-io] Client Loadbalancing with Kubernetes SVC

2018-09-18 Thread Josh Humphries
Unless things have changed recently, the default kubedns result for a
standard k8s service will have a single IP: the virtual IP of the service.
This in turn causes the gRPC client to configure just a single socket
connection and route all requests on the one socket. Since kubeproxy load
balancing through that virtual IP is layer 4, this can result in very poor
load balancing, especially if you have unbalanced clients (e.g. a small
number of clients that constitute the majority of RPC traffic).

If you use "headless" services in k8s on the other hand, the DNS query
should return multiple pod IPs. This causes the gRPC client to in turn
maintain multiple connections, one to each destination pod. However, I am
not sure how responsive this will be to topology changes (as pod instances
are auto-scaled, or killed and rescheduled, they will move around and
change IP address). It would require disabling DNS caching and making sure
the service host name is resolved/polled regularly.

Another solution is to use a custom gRPC resolver that talks to the k8s API
to watch the service topology and convey the results to the gRPC client.
For Go, this is implemented in an open-source package:
github.com/sercand/kuberesolver

(Most of my experience is with Go. So your mileage may vary if using a
runtime other than Go. But I think the various implementations largely
behave quite similarly.)
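
For the JVM, a minimal sketch of the headless-service approach described above: the target is a placeholder headless Service name, and a round-robin policy is assumed so the client actually spreads RPCs across the per-pod connections instead of picking just one (recent grpc-java releases expose this via defaultLoadBalancingPolicy).

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class HeadlessServiceChannel {
  public static ManagedChannel create() {
    // With a headless Service, the DNS resolver gets one A record per pod,
    // so the channel keeps a subchannel per pod and round-robins over them.
    return ManagedChannelBuilder
        .forTarget("dns:///my-grpc-svc.my-namespace.svc.cluster.local:50051")
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();
  }
}

The resolver-based approach (the role kuberesolver plays for Go) would map to a custom io.grpc.NameResolverProvider on the JVM that watches the Kubernetes endpoints API and pushes address updates into the channel, which sidesteps the DNS caching concerns.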

*Josh Humphries*
jh...@bluegosling.com


On Tue, Sep 18, 2018 at 7:10 AM  wrote:

> Hello,
>
> Does it make sense to have a client-side load balancer with gRPC when we are
> running the gRPC server in a Kubernetes cluster?
> The client will dial a service DNS name and will always retrieve the IP of
> the service, not the IPs of the Pods behind it.
>
> NB: I have already seen this page:
> https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
>
> Thanks
>
>
>



[grpc-io] Client Loadbalancing with Kubernetes SVC

2018-09-18 Thread julien . senon
Hello,

Does it make sense to have a client-side load balancer with gRPC when we are 
running the gRPC server in a Kubernetes cluster? 
The client will dial a service DNS name and will always retrieve the IP of 
the service, not the IPs of the Pods behind it.

NB: I have already seen this page: 
https://github.com/grpc/grpc/blob/master/doc/load-balancing.md

Thanks




[grpc-io] gRPC building server and client

2018-09-18 Thread qplc
Hi,

NettyServerBuilder, while constructing a gRPC server:
What is the primary difference between the boss and worker event loop 
groups? What is the role of the executor and of the boss and worker event 
loop groups when constructing a NettyServerBuilder?

NettyChannelBuilder, while constructing a gRPC client:
What is the role of the executor and the event loop group? How are they 
different from each other?

Does the executor play the same role when building a gRPC server as when 
building a client?

Can someone clarify the above?

Regards,
qplc
