It's a bit unfortunate that the proxy would have to decode enough of the wire protocol to find the lookup commands and responses in order to build the "routing map". I'd still be tempted to do it with nginx and a Lua module. If you're careful, you can avoid deserializing responses unless you have an outstanding lookup request, but you'll still have to keep track of the message framing.
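To make the framing point concrete, here is a minimal sketch (not Pulsar's actual proxy code) of what the broker-to-client side of such a proxy would track. It assumes a simple 4-byte big-endian length prefix per frame, which should be checked against the real Pulsar binary protocol; RoutingMap and LookupResponse.tryDecode are hypothetical placeholders standing in for the real protobuf decoding:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Frame-level splitter for a length-prefixed binary protocol.
 *
 * Assumes each frame starts with a 4-byte big-endian "total size" field
 * covering the rest of the frame (check this against the actual Pulsar
 * protocol definition before relying on it).
 *
 * The point: the proxy only needs to track frame boundaries; it skips the
 * frame body entirely unless it is currently waiting for a lookup response,
 * in which case it hands the bytes to a decoder to update the routing map.
 */
public class FrameSplitter {

    /** Set to true after the proxy forwards a lookup request upstream. */
    private boolean lookupOutstanding = false;

    public void onLookupRequestForwarded() {
        lookupOutstanding = true;
    }

    /** Reads frames arriving from the broker side of the connection. */
    public void readFrames(InputStream brokerIn, RoutingMap routingMap) throws IOException {
        DataInputStream in = new DataInputStream(brokerIn);
        while (true) {
            int totalSize = in.readInt();          // 4-byte size prefix (assumed framing)
            byte[] frame = new byte[totalSize];
            in.readFully(frame);                   // track framing even when not decoding

            if (lookupOutstanding) {
                // Only now pay the cost of deserializing the command to see
                // whether it is the lookup response we are waiting for.
                LookupResponse resp = LookupResponse.tryDecode(frame); // hypothetical decoder
                if (resp != null) {
                    routingMap.put(resp.topic(), resp.brokerServiceUrl());
                    lookupOutstanding = false;
                }
            }
            // A real proxy would also forward the raw frame downstream here.
        }
    }

    /** Minimal placeholder types so the sketch is self-contained. */
    interface RoutingMap { void put(String topic, String brokerUrl); }

    record LookupResponse(String topic, String brokerServiceUrl) {
        static LookupResponse tryDecode(byte[] frame) { return null; /* real impl: protobuf parse */ }
    }
}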
On Friday, June 30, 2017, 12:12:44 PM PDT, Matteo Merli <matteo.me...@gmail.com> wrote:

On Thu, Jun 29, 2017 at 6:33 PM, Dave Fisher <dave2w...@comcast.net> wrote:
> I mean do you think it would meet the needs of a proxy, including SSL?
>
> I'll look into this more as this proxy design intrigues me.

So, ZooKeeper is more of a distributed coordination service and it doesn't really work as a proxy. We use it in Pulsar to coordinate brokers and storage nodes and to store metadata. One design trait is that we don't want to expose the ZK service to our users, since it's a very critical piece of the infrastructure (if ZK is down, the Pulsar cluster cannot operate). Here is a high-level diagram that shows where ZK is used in Pulsar:
https://github.com/apache/incubator-pulsar/blob/master/docs/Architecture.md#architecture

As Maurice commented, there are many ways to do an HTTP or TCP proxy, from nginx to Apache TrafficServer, but these won't work for proxying to stateful backend services. This is a common problem for cloud deployments. Kafka has the same issue, and the solution they offer is a REST proxy exposed to the outside world (but that carries a huge performance penalty, especially if you need to guarantee message ordering).

For the SSL part, unless you have an L4 proxy such as a VIP, the SSL needs to be terminated at that layer. I think this fits well for most deployments anyway, and has the advantage of offloading the SSL work to the proxy rather than the broker.

Matteo

--
Matteo Merli <matteo.me...@gmail.com>
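As a concrete illustration of the SSL-offload point above, here is a minimal sketch of an L4 pass-through with TLS terminated at the proxy, using nginx's stream module (hostnames, ports and certificate paths are placeholders). This only covers the SSL offload; a plain TCP proxy like this still doesn't solve the topic-lookup/routing problem discussed earlier in the thread.

stream {
    server {
        # TLS is terminated here; plain TCP is forwarded to the broker.
        listen 6651 ssl;

        ssl_certificate     /etc/nginx/certs/proxy-cert.pem;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/proxy-key.pem;

        # Single upstream broker; with multiple brokers this is exactly
        # where the stateful lookup/routing problem shows up.
        proxy_pass broker1.internal:6650;
    }
}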