Hi Carl,
I got the above example from sample code by a Google author himself, where
they use the DNS factory:
https://github.com/saturnism/grpc-java-by-example/blob/master/kubernetes-lb-example/echo-client-lb-dns/src/main/java/com/example/grpc/client/ClientSideLoadBalancedEchoClient.java

Do you have sample code for the picker/subchannel approach? We are running
out of time for delivery. I will certainly do R&D on the picker/subchannel
once our imminent delivery is completed.

Thanks

On Mon, Aug 13, 2018 at 10:54 PM, 'Carl Mastrangelo' via grpc.io <
[email protected]> wrote:

> Responses inline
>
> On Friday, August 10, 2018 at 7:40:25 PM UTC-7, Isuru Samaraweera wrote:
>>
>> Hi Carl,
>> Thanks for the reply. Due to timeline constraints I switched back to the
>> default round-robin DNS resolver, as below. It is serving the purpose for
>> the current workload. In the future I will certainly do further R&D on
>> Eureka, considering your inputs as well. Here is the code I fell back to:
>>
>> ManagedChannelBuilder.forTarget(grpcHostTmp.getHost() + ":" + grpcHostTmp.getPort())
>>     .nameResolverFactory(new DnsNameResolverProvider())
>>     .executor(messagingThreadPoolExecutor)
>>
>
> I would avoid using DnsNameResolverProvider.  It's in our
> io.grpc.internal package, which means we can make breaking changes to that
> API.   Also, DnsNameResolver is the default name resolver, so there is no
> need to specify it.
>
>
>
>>     .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
>>     .usePlaintext(true)
>>     .build());
>>
>>
>> Is there any way to load-balance requests evenly across nodes with
>> DnsNameResolverProvider? Right now what happens is that the same node
>> takes the load until it goes down; once it goes down, the next node takes
>> the load. You can see the implementation in
>> RoundRobinLoadBalancerFactory.Picker#nextSubchannel().
>>
>
> Are you using streaming RPCs, or unary?   Streaming RPCs can't change
> their backend once started, but every new RPC can.  RoundRobinLoadBalancer
> should pick a new backend for every single RPC.
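>
> For example (a sketch; the Echo service and message types are placeholders
> for your generated stubs):
>
> EchoServiceGrpc.EchoServiceBlockingStub stub = EchoServiceGrpc.newBlockingStub(channel);
> // Each loop iteration is a new unary RPC, so the round robin picker is
> // consulted again and may hand back a different subchannel (backend):
> for (int i = 0; i < 10; i++) {
>     stub.echo(EchoRequest.newBuilder().setMessage("ping " + i).build());
> }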
>
>
>>
>> Is there a way to rotate requests in active-active mode, rather than the
>> active-passive behavior described above?
>>
>
> I think you may be using PickFirstLoadBalancer, which has that behavior.
>
>
>>
>> I am looking for a gRPC built-in solution, without third-party Eureka or
>> ZooKeeper. Or do you have plans to incorporate better load-balancing APIs
>> into the gRPC core libraries?
>>
>
> We keep most things out of the core library because not everyone needs
> them.   We do promote other implementations of gRPC components here:
> https://github.com/grpc-ecosystem
>
>
>>
>> Thanks,
>> Isuru
>>
>>
>> On Fri, Aug 10, 2018 at 10:41 PM, Carl Mastrangelo <[email protected]>
>> wrote:
>>
>>> I believe you need to create the InetSocketAddress from an IP address,
>>> rather than a host name.  Typically host names are looked up via DNS (not
>>> sure what Eureka returns to you).   If you use the normal InetSocketAddress
>>> constructor, Java will do the DNS lookup for you.
>>>
>>> (It is also possible to not use Java's DNS resolver, but that's more
>>> complex.  I'd avoid it until you know you need it)
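>>>
>>> For example, in your resolver the change would look something like this
>>> (a sketch, reusing the fields from your code below):
>>>
>>> // new InetSocketAddress(...) performs the DNS lookup eagerly, so the
>>> // channel receives an address it can actually connect to:
>>> servers.add(new EquivalentAddressGroup(
>>>     new InetSocketAddress(serviceInstance.getHostName(),
>>>         serviceInstance.getPort())));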
>>>
>>> On Thu, Aug 9, 2018 at 11:28 PM Isuru Samaraweera <[email protected]>
>>> wrote:
>>>
>>>> Hi Carl,
>>>> Thanks for the reply. However, when I do the Eureka address lookup, the
>>>> lookup seems fine. But when the stub method is invoked asynchronously,
>>>> the error below pops up.
>>>>
>>>> Caused by: java.nio.channels.UnresolvedAddressException
>>>> at sun.nio.ch.Net.checkAddress(Net.java:101)
>>>> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
>>>> at io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:83)
>>>> at io.netty.util.internal.SocketUtils$3.run(SocketUtils.java:80)
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>> at io.netty.util.internal.SocketUtils.connect(SocketUtils.java:80)
>>>> at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:310)
>>>> at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:254)
>>>> at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1366)
>>>> at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:545)
>>>> at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:530)
>>>> at io.netty.handler.codec.http2.Http2ConnectionHandler.connect(Http2ConnectionHandler.java:461)
>>>> at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:545)
>>>> at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:530)
>>>> at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
>>>> at io.grpc.netty.ProtocolNegotiators$AbstractBufferingHandler.connect(ProtocolNegotiators.java:466)
>>>> at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:545)
>>>> at io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:38)
>>>> at io.netty.channel.AbstractChannelHandlerContext$11.run(AbstractChannelHandlerContext.java:535)
>>>> at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>>>> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
>>>> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
>>>> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
>>>> at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>
>>>>
>>>>
>>>> I do the lookup as below:
>>>>
>>>> Application application = eurekaClient.getApplication(serviceName);
>>>> List<EquivalentAddressGroup> servers = new ArrayList<>();
>>>> for (InstanceInfo serviceInstance : application.getInstances()) {
>>>>     servers.add(new EquivalentAddressGroup(
>>>>         InetSocketAddress.createUnresolved(serviceInstance.getHostName(),
>>>>             serviceInstance.getPort())));
>>>> }
>>>> listener.onAddresses(servers, Attributes.EMPTY);
>>>>
>>>>
>>>> The lookup finds the addresses properly.
>>>>
>>>> Do you have any clue why the above exception is thrown when the stub
>>>> method is invoked?
>>>>
>>>>
>>>>
>>>> On Fri, Aug 10, 2018 at 3:24 AM, 'Carl Mastrangelo' via grpc.io <
>>>> [email protected]> wrote:
>>>>
>>>>> It is safe to share the channel. The decision of which server to use
>>>>> is actually up to the load balancer, and it will correctly stop sending
>>>>> traffic to a server if the connection fails.
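>>>>>
>>>>> For example (a sketch; the stub class names are placeholders for your
>>>>> generated code):
>>>>>
>>>>> // The channel is built once, as in your snippet below, and shared;
>>>>> // stubs are lightweight and can be created wherever they are needed.
>>>>> EchoServiceGrpc.EchoServiceStub asyncStub = EchoServiceGrpc.newStub(channel);
>>>>> EchoServiceGrpc.EchoServiceBlockingStub blockingStub = EchoServiceGrpc.newBlockingStub(channel);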
>>>>>
>>>>> On Thursday, August 9, 2018 at 8:56:08 AM UTC-7, Isuru Samaraweera
>>>>> wrote:
>>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> I am trying to use Eureka as a discovery server and do round-robin
>>>>>> client-side load balancing at the gRPC client level. I have 3 gRPC
>>>>>> server nodes registered in Eureka.
>>>>>>
>>>>>> Here is how I create a ManagedChannel in the client:
>>>>>>
>>>>>> EurekaClientConfig eurekaClientConfig = new DefaultEurekaClientConfig();
>>>>>> ManagedChannel channel = ManagedChannelBuilder
>>>>>>     .forTarget("eureka://" + "service-aeroline-passenger-messaging")
>>>>>>     .nameResolverFactory(new EurekaNameResolverProvider(eurekaClientConfig, "9071"))
>>>>>>     .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
>>>>>>     .usePlaintext(true)
>>>>>>     .build();
>>>>>>
>>>>>> My question is: is it OK to create one channel and share it across the
>>>>>> client application, presuming that rotation across the various gRPC
>>>>>> server nodes is taken care of by the ManagedChannel itself? I.e., if
>>>>>> server1 goes down, is the channel automatically diverted to server2
>>>>>> without creating a new channel object?
>>>>>>
>>>>>> Thanks,
>>>>>> Isuru
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Isuru Samaraweera
>>>>
>>>
>>
>>
>> --
>> Isuru Samaraweera
>>



-- 
Isuru Samaraweera
