Re: [grpc-io] python - how to access client certificate from server

2016-12-29 Thread 'Arthur Wiebe' via grpc.io
Cool thanks.

On Thu, Dec 29, 2016 at 2:27 PM Nathaniel Manista 
wrote:

> On Wed, Dec 28, 2016 at 11:19 AM, Nathaniel Manista 
> wrote:
>
> Consider filing an issue in our issue tracker?
>
>
> Never mind about filing a new issue; here's the current issue that for
> some reason I missed in my searching yesterday.
> -N
>
-- 
Arthur Wiebe | +1 519-670-5255

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CALmrN2Y_r8YpwOFb4MVA8uucg7xMB_UppkTwjxydCnpvgn1cPw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] python - how to access client certificate from server

2016-12-29 Thread 'Nathaniel Manista' via grpc.io
On Wed, Dec 28, 2016 at 11:19 AM, Nathaniel Manista 
wrote:

> Consider filing an issue in our issue tracker?
>

Never mind about filing a new issue; here's the current issue that for some
reason I missed in my searching yesterday.
-N
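(For reference alongside the issue linked above: a minimal sketch of how a Python servicer can read the authenticated peer's identity once mutual TLS is configured. This is illustrative, not the thread's own code; the `MyMethod` wiring in the comments and the fingerprint helper are assumptions, and `peer_identities()` is only populated when the server requires client auth.)

```python
import hashlib

def fingerprint(identity_bytes):
    """SHA-256 fingerprint of a raw peer identity (e.g. DER cert bytes)."""
    return hashlib.sha256(identity_bytes).hexdigest()

# Inside a servicer method, grpc exposes the authenticated peer through the
# ServicerContext (the server must be started with mutual-TLS credentials
# and require_client_auth=True for these to be populated):
#
#     def MyMethod(self, request, context):
#         idents = context.peer_identities()    # list of bytes, or None
#         props = dict(context.auth_context())  # transport auth properties
#         if idents:
#             print("client:", fingerprint(idents[0]))

# The helper itself is plain stdlib and can be exercised directly:
fp = fingerprint(b"example-client-cert-der")
print(len(fp))  # 64 hex characters
```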





Re: [grpc-io] Servers in PHP?

2016-12-29 Thread 'Jayant Kolhe' via grpc.io
Hi Zack,

Please let us know if you have run into any bumps. We would love to see it
come up in the gRPC ecosystem, since a lot of users have expressed interest
in it.

Thanks,

 - Jayant


On Fri, Dec 9, 2016 at 2:14 PM,  wrote:

> Just a quick update -- we've cleared the open sourcing with the rest of
> the team and are working to extract some proprietary bits.
>
> Will get something up ASAP to be evaluated for admission into the gRPC
> ecosystem.
>
> On Thursday, December 8, 2016 at 1:35:42 AM UTC-6, scott molinari wrote:
>>
>> I'd also be very interested in your PHP server solution. A php
>> microservice platform could be one of those things, which would help PHP
>> lose that "red headed step child of a language" rap it gets. :-)
>>
>> Scott
>>



[grpc-io] Re: How to reproduce the latency and qps benchmark numbers for grpc-java?

2016-12-29 Thread 'Carl Mastrangelo' via grpc.io
The benchmarks are run using LoadClient and LoadServer, in a sibling 
directory.  The test you ran is designed to maximize QPS rather 
than to minimize latency.  The latency benchmarks are designed to be 
controlled by a coordinating process found in the C core repo.
There are separate scripts to build the Java code and then run it.  The 
running code is here: 
https://github.com/grpc/grpc/blob/master/tools/run_tests/run_performance_tests.py


As a general note: more channels (in Java) don't result in higher QPS.  The 
Channel abstraction is designed to be shared by many threads, so you 
usually only need one.  Using 32 in your case will almost certainly hurt 
performance.  (I personally use 4.)
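As a side note on the numbers themselves: percentile figures like the ones quoted in this thread can be derived from raw per-RPC latency samples with a simple nearest-rank computation. The sketch below is purely illustrative (it is not the benchmark harness's actual code, and the sample values are made up):

```python
def percentile(samples, p):
    """Nearest-rank p-th percentile of latency samples (e.g. in micros)."""
    s = sorted(samples)
    rank = max(1, min(len(s), round(p / 100.0 * len(s))))
    return s[rank - 1]

latencies = [300, 450, 617, 800, 1011, 1200, 2025, 7659]
for p in (50, 90, 95, 99):
    print("%d%%ile: %d" % (p, percentile(latencies, p)))
```

With few samples the tail percentiles collapse onto the maximum, which is why the harness records a large number of RPCs before reporting 99.9%ile figures.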

On Thursday, December 22, 2016 at 9:11:41 AM UTC-8, aruf...@gmail.com wrote:
>
> Hi!
>
> The benchmark page shows that grpc-java has a unary call latency of ~300us 
> and a QPS of ~150k between two 8-core VMs.
>
>
> https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5712453606309888
>
> How do I reproduce these numbers with AsyncClient and AsyncServer? What 
> are the command line parameters used for producing these numbers?
>
> I ran the benchmark on a pair of 16-core VMs with a 40 Gbps network in the 
> same datacenter. 
>
> I ran server with:
> java io.grpc.benchmarks.qps.AsyncServer --address=0.0.0.0:9000 
> --transport=netty_nio
>
> and client with:
> java io.grpc.benchmarks.qps.AsyncClient --address=server:9000 
> --transport=netty_nio
>
> and results:
>
> Channels:   4
>
> Outstanding RPCs per Channel:   10
>
> Server Payload Size:0
>
> Client Payload Size:0
>
> 50%ile Latency (in micros): *2151*
>
> 90%ile Latency (in micros): 8087
>
> 95%ile Latency (in micros): 10607
>
> 99%ile Latency (in micros): 17711
>
> 99.9%ile Latency (in micros):   39359
>
> Maximum Latency (in micros):413951
>
> QPS:*10917*
>
>
> For optimizing latency I ran server with --directexecutor and client with 
> --channels=1 --outstanding_rpcs=1
>
> Channels:   1
>
> Outstanding RPCs per Channel:   1
>
> Server Payload Size:0
>
> Client Payload Size:0
>
> 50%ile Latency (in micros): *617*
>
> 90%ile Latency (in micros): 1011
>
> 95%ile Latency (in micros): 2025
>
> 99%ile Latency (in micros): 7659
>
> 99.9%ile Latency (in micros):   18255
>
> Maximum Latency (in micros):125567
>
> QPS:1094
>
>
> For optimizing throughput I ran client with --directexecutor and 
> --channels=32 and --outstanding_rpcs=1000
>
> Channels:   32
>
> Outstanding RPCs per Channel:   1000
>
> Server Payload Size:0
>
> Client Payload Size:0
>
> 50%ile Latency (in micros): 167935
>
> 90%ile Latency (in micros): 520447
>
> 95%ile Latency (in micros): 652799
>
> 99%ile Latency (in micros): 1368063
>
> 99.9%ile Latency (in micros):   2390015
>
> Maximum Latency (in micros):3741695
>
> QPS:120428
>
>
> Without --directexecutor in the server and client with --channels=32 and 
> --outstanding_rpcs=1000
>
> Channels:   32
>
> Outstanding RPCs per Channel:   1000
>
> Server Payload Size:0
>
> Client Payload Size:0
>
> 50%ile Latency (in micros): 347135
>
> 90%ile Latency (in micros): 1097727
>
> 95%ile Latency (in micros): 1499135
>
> 99%ile Latency (in micros): 2330623
>
> 99.9%ile Latency (in micros):   3735551
>
> Maximum Latency (in micros):6463487
>
> QPS:55969
>
>
> What is the recommended configuration to achieve the claimed throughput of 
> 150k qps? What are the parameters used for generating the numbers? I'm not 
> able to find that anywhere.
>
>
> Thanks!
>
>
> Alpha
>



[grpc-io] Re: [grpc-java] Lots of Stream Error messages while doing Server-side streaming RPC

2016-12-29 Thread 'Carl Mastrangelo' via grpc.io
What is the domain you are connecting to, and do you have security set up? 
It looks like you are able to connect to the server, but it is shutting 
down the connection before you can start the RPC.

On Thursday, December 22, 2016 at 1:00:36 AM UTC-8, Ankur Chauhan wrote:
>
>
> I am building a grpc server that queries Google cloud bigtable and based 
> on a user request and delivers a stream of de-serialized (protobuf) rows to 
> the user.
>
> I have noticed that there are a lot of "Stream Error" messages in logs:
>
> "Stream Error
> io.netty.handler.codec.http2.Http2Exception$StreamException: Stream closed 
> before write could take place
> at 
> io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:147)
> at 
> io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$FlowState.cancel(DefaultHttp2RemoteFlowController.java:487)
> at 
> io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$FlowState.cancel(DefaultHttp2RemoteFlowController.java:468)
> at 
> io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$1.onStreamClosed(DefaultHttp2RemoteFlowController.java:103)
> at 
> io.netty.handler.codec.http2.DefaultHttp2Connection.notifyClosed(DefaultHttp2Connection.java:343)
> at 
> io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.removeFromActiveStreams(DefaultHttp2Connection.java:1168)
> at 
> io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.deactivate(DefaultHttp2Connection.java:1116)
> at 
> io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultStream.close(DefaultHttp2Connection.java:522)
> at 
> io.netty.handler.codec.http2.DefaultHttp2Connection.close(DefaultHttp2Connection.java:149)
> at 
> io.netty.handler.codec.http2.Http2ConnectionHandler$BaseDecoder.channelInactive(Http2ConnectionHandler.java:181)
> at 
> io.netty.handler.codec.http2.Http2ConnectionHandler.channelInactive(Http2ConnectionHandler.java:374)
> at 
> io.grpc.netty.NettyServerHandler.channelInactive(NettyServerHandler.java:274)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:256)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:235)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:360)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:325)
> at 
> io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:726)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:256)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:235)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1329)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:256)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:908)
> at 
> io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744)
> at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418)
> at 
> io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:312)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> at java.lang.Thread.run(Thread.java:745)
>
>
> My method is pretty basic and the following snippet captures the essence 
> of the service call.
>
> final Stream<Row> rowStream = streamFromBigTable(request);
> final ServerCallStreamObserver<Row> responseObserver = 
> (ServerCallStreamObserver<Row>) _responseObserver;
> StreamObservers.copyWithFlowControl(rowStream.iterator(), 
> responseObserver);
>
> Can someone elaborate on the origin of these error messages? They seem bad, 
> but I can't seem to find a way to control them.
>
> -- Ankur Chauhan
>
>
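The flow-control behavior that StreamObservers.copyWithFlowControl provides can be pictured with a small sketch (written in Python here purely for illustration; `Sink` is a hypothetical stand-in for ServerCallStreamObserver, and `drain` stands in for the transport's onReady callback): messages are pulled from the source iterator only while the sink reports it is ready, so a slow peer never forces unbounded buffering.

```python
class Sink:
    """Hypothetical stand-in for ServerCallStreamObserver."""
    def __init__(self, capacity):
        self.buffer = []          # pretend transport send window
        self.capacity = capacity
        self.completed = False

    def is_ready(self):
        return len(self.buffer) < self.capacity

    def on_next(self, item):
        self.buffer.append(item)

    def on_completed(self):
        self.completed = True

def copy_with_flow_control(source, sink, drain):
    """Pull from `source` only while `sink` is ready; `drain` simulates the
    transport flushing buffered writes between readiness checks."""
    it = iter(source)
    while True:
        while sink.is_ready():
            try:
                item = next(it)
            except StopIteration:
                sink.on_completed()
                return
            sink.on_next(item)
        drain(sink)  # in real gRPC this is the onReady callback firing

sink = Sink(capacity=2)
copy_with_flow_control(range(5), sink, drain=lambda s: s.buffer.clear())
print(sink.completed)
```

If the peer closes the connection while writes are still buffered, the pending writes are cancelled, which is typically what Netty's "Stream closed before write could take place" in the trace above indicates.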


[grpc-io] C++ : lifetime of stream requests?

2016-12-29 Thread Christian Rivasseau
Hi,

in ClientAsyncReaderWriter::Write(const T& msg, tag),
is it OK to delete 'msg' after Write() returns, or should it be
kept alive until 'tag' is notified?

Thanks a lot,


-- 
Christian Rivasseau
Co-founder and CTO @ Lefty 
+33 6 67 35 26 74
