Expanding on this answer further:
> The data passes through the said proxy.
Ok, so that makes sense. So basically:
      +--> B1
      |
C --> P --> B2
      |
      +--> B3
If this diagram doesn't get mangled: the issue you have is that the proxy
can't have access to the certs, but each of the backends may have a different
cert, which may be self-signed.
Also, assuming there isn't any SSL today, you can have traffic to the proxy
use either port 80 or 443 to distinguish plaintext from secure. The client
will need to know the CA cert that signed the TLS certs for B1, B2, and B3.
Each backend has its own key and its own cert. When the client connects, the
server presents its own cert, which the client trusts via the CA cert.
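To make that concrete, here is a minimal Go sketch (file names and the proxy
hostname are made up) of one backend serving with its own key/cert, plus a
client that only needs the shared CA cert. Your real client is Objective-C,
but the shape is the same:

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// Backend side (B1): serve with its own key/cert, signed by the shared CA.
func runBackend() {
	creds, err := credentials.NewServerTLSFromFile("b1.crt", "b1.key")
	if err != nil {
		log.Fatalf("loading b1 cert/key: %v", err)
	}
	lis, err := net.Listen("tcp", ":443")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer(grpc.Creds(creds))
	// pb.RegisterYourServiceServer(srv, &yourService{}) would go here.
	log.Fatal(srv.Serve(lis))
}

// Client side: only needs ca.crt, the CA that signed b1/b2/b3's certs.
func dialViaProxy() (*grpc.ClientConn, error) {
	creds, err := credentials.NewClientTLSFromFile("ca.crt", "")
	if err != nil {
		return nil, err
	}
	// The client dials the proxy; the TLS handshake terminates at whichever
	// backend the proxy picked, since the proxy just passes bytes through.
	return grpc.Dial("proxy.example.com:443", grpc.WithTransportCredentials(creds))
}

func main() {
	go runBackend()
	conn, err := dialViaProxy()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}

One caveat with the passthrough setup: the client verifies the cert against
the name it dialed (the proxy's hostname), so each backend's cert needs that
name in its SANs, not just the backend's own name.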
On Tue, Oct 16, 2018 at 4:47 PM Carl Mastrangelo <[email protected]> wrote:
> There are a few options. The key words to look for are "L7" loadbalancing
> and "L4" loadbalancing. For L7, your entry point to the load balancer,
> typically some kind of reverse proxy, decodes the TLS and then forwards the
> traffic to the correct backend. Your client sends traffic to the proxy
> which then decides which of the available backends is least loaded. For
> L4, there is still a reverse proxy, but it does not decode TLS. Instead,
> it forwards all the encrypted data to a backend IP address, again deciding
> where to send based on load (or even plain round robin). The benefit
> of L7 load balancing is that it can make smarter decisions about where to
> send traffic, but it has the downside of being slightly slower. L4 is nice
> because the proxy does not need the TLS certs (as the hardware may not be
> trusted), but it can't inspect requests to make per-request routing decisions.
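
Just to illustrate what "forwards all the encrypted data" means in practice,
a toy L4 passthrough could be as small as the Go sketch below (the backend
addresses are invented; a real setup would use Netscaler/HAProxy/etc. rather
than hand-rolled code):

package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

// Toy L4 passthrough: accept TCP connections and forward the raw
// (still encrypted) bytes to a backend chosen round-robin.
var backends = []string{"10.0.0.1:443", "10.0.0.2:443", "10.0.0.3:443"}
var next uint64

func main() {
	lis, err := net.Listen("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := lis.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go forward(client)
	}
}

func forward(client net.Conn) {
	defer client.Close()
	// Round-robin choice; no TLS decoding, no request inspection.
	addr := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
	backend, err := net.Dial("tcp", addr)
	if err != nil {
		log.Printf("dial %s: %v", addr, err)
		return
	}
	defer backend.Close()
	go io.Copy(backend, client) // client -> backend
	io.Copy(client, backend)    // backend -> client
}

Note that the proxy never loads a certificate and never sees plaintext; it
only picks a backend and shovels bytes.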
>
> In both cases, the client always sends traffic to the same place, which is
> in charge of routing to the next hop. Also in both cases, the LB proxy
> needs to know all the backends available to send traffic to, and a way of
> telling if they are healthy. Depending on how big your architecture is,
> even these two approaches are not enough, but let's not get too complicated
> too quickly.
>
> In gRPC LB, the approach differs from the two above. Instead, the client
> contacts a dedicated load balancing service (i.e. gRPCLB) at startup and
> asks it for addresses to connect to. The gRPCLB service can
> send a list of backend IP addresses to use, as well as relative weights for
> how much traffic each BE should take. This is probably the most scalable
> approach, because it avoids the intermediate proxy altogether. However,
> there is no premade gRPCLB server available and you would have to implement
> the protocol yourself.
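
There's no sketch of the gRPCLB protocol itself here, but as a rough
illustration of the same client-side idea in grpc-go (assuming a reasonably
recent grpc-go and a made-up DNS name with one A record per backend), the
client can resolve all backend addresses itself and spread RPCs with the
built-in round_robin policy:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	creds, err := credentials.NewClientTLSFromFile("ca.crt", "")
	if err != nil {
		log.Fatal(err)
	}
	// dns:/// makes the client resolve every A record for the name;
	// round_robin opens a connection to each address and rotates RPCs
	// across them, so no proxy sits in the data path.
	conn, err := grpc.Dial(
		"dns:///backends.example.com:443",
		grpc.WithTransportCredentials(creds),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// Use conn with your generated client stubs as usual.
}

Like gRPCLB, this keeps the proxy out of the data path; the difference is
that the address list comes from DNS instead of a dedicated balancer service.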
>
> HTH,
> Carl
>
> On Tuesday, October 16, 2018 at 9:57:34 AM UTC-7, [email protected]
> wrote:
>>
>> We're setting up a mobile application (objective-c) that communicates
>> back to the server (go) using gRPC. We intend to place those servers
>> behind a Netscaler load balancer. We now have a requirement to encrypt the
>> messages going through. How would we configure the client/server/load
>> balancer to accept and forward on the messages with TLS back to the
>> individual servers? We thought about attempting the 1st certificate, and
>> if that fails, trying the subsequent ones. That seems like a very fragile
>> approach. How does secure load balancing happen in gRPC world?
>>
>