malthe opened a new issue, #23539:
URL: https://github.com/apache/pulsar/issues/23539

   ### Search before asking
   
   - [X] I searched in the [issues](https://github.com/apache/pulsar/issues) 
and found nothing similar.
   
   
   ### Motivation
   
   Pulsar now has a pluggable interface for coordination and metadata services; see #572, which was resolved through [PIP-45](https://github.com/apache/pulsar/wiki/PIP-45:-Pluggable-metadata-interface).
   
   Apache NiFi has done something similar, though so far targeting only the services Kubernetes already offers, namely the [Lease API](https://kubernetes.io/docs/concepts/architecture/leases/) and [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/):
   
   
https://exceptionfactory.com/posts/2024/08/10/bringing-kubernetes-clustering-to-apache-nifi/
   
   Since Kubernetes is itself currently backed by etcd, such a backend should perform similarly to a direct etcd-based one.
   
   The [motivation](https://www.slideshare.net/slideshow/towards-a-zookeeperless-pulsar-etcd-etcd-etcd-pulsar-summit-sf-2022/254531636#3) presented at the Pulsar Summit in 2022 applies all the more here:
   
   Small clusters → remove overhead
   - Fewer components to deploy
   - Easier operations
   
   ### Solution
   
   Include a coordination and metadata backend that uses native Kubernetes 
services.
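
   As a rough illustration of what such a backend could build on, the two Kubernetes objects involved might look as follows. This is only a sketch: all names, the namespace, and the data key are hypothetical, and hierarchical metadata paths would have to be encoded to fit ConfigMap key syntax (keys may not contain `/`):

   ```yaml
   # Hypothetical Lease for broker leader election
   # (names and namespace are illustrative, not existing Pulsar config).
   apiVersion: coordination.k8s.io/v1
   kind: Lease
   metadata:
     name: pulsar-leader-election
     namespace: pulsar
   spec:
     holderIdentity: broker-0        # written by the current leader
     leaseDurationSeconds: 15        # others may take over after expiry
   ---
   # Hypothetical ConfigMap holding metadata entries keyed by encoded path.
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: pulsar-metadata
     namespace: pulsar
   data:
     admin.policies.tenant-ns: "{}"
   ```

   Compare-and-swap semantics could come from Kubernetes' optimistic concurrency control via `metadata.resourceVersion`, roughly analogous to the versions/revisions the metadata interface gets from ZooKeeper or etcd today.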
   
   ### Alternatives
   
   In the past, people have written proxies that expose, for example, the ZooKeeper API on top of etcd; see [zetcd](https://github.com/etcd-io/zetcd). It could be argued that an entirely separate service should instead be written that standardizes the use of Kubernetes services for leader election and metadata needs.
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] I'm willing to submit a PR!

