You might also consider using a DNS round-robin entry.  i.e., have
my-zookeeper-dnsrr-hostname resolve to all of the IPs (or hostnames) of
your ZooKeeper cluster.  Then, when the cluster changes, you update DNS and
perform a rolling restart of your Kafka cluster to pick up the new set of
ZK servers.  It's essentially the same as hardcoding the server list from
the application's perspective, but it gives you a layer of indirection.
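The round-robin approach above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the hostname is the hypothetical one from above, and 2181 is ZooKeeper's default client port.

```python
import socket

def zk_connect_string(dnsrr_host, port=2181):
    """Resolve a round-robin DNS name to all of its A records and build
    a ZooKeeper connect string like '10.0.0.1:2181,10.0.0.2:2181'."""
    infos = socket.getaddrinfo(dnsrr_host, port, socket.AF_INET,
                               socket.SOCK_STREAM)
    ips = sorted({info[4][0] for info in infos})  # de-duplicate addresses
    return ",".join("%s:%d" % (ip, port) for ip in ips)

# Each client resolves the name at (re)start time, so a DNS update plus
# a rolling restart picks up the new server set:
# zk_connect_string("my-zookeeper-dnsrr-hostname")
```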

- Erik

On Tuesday, May 5, 2015, Daniel Compton <[email protected]>
wrote:

> Sorry, I'm not working on that project anymore and so don't have the files
> to share. However the gist of it was
>
> 1. Assign roles to your machines with Puppet.
> 2. Use Puppet to manage your Kafka config, creating a
> server.properties.erb template.
> 3. Query Puppet for all of the machines running the Zookeeper role.
> 4. In the zookeeper.connect parameter, build a comma-separated string of
> the IPs of each machine running the Zookeeper role.
>
> A slightly simpler alternative is to hardcode zookeeper.connect with the
> IPs of your Zookeepers in your Puppet config. Then, when you want to scale
> up or down, or migrate to different IPs, make the change in your Puppet
> config and propagate it out to your Kafka nodes.
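For illustration, steps 3 and 4 above boil down to joining the discovered hosts into one connect string. A minimal sketch in Python; the host list here is hypothetical, standing in for whatever a Puppet query for the Zookeeper role returns:

```python
def render_zookeeper_connect(zk_hosts, port=2181, chroot=""):
    """Render the zookeeper.connect line for server.properties,
    e.g. 'zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181'."""
    hosts = ",".join("%s:%d" % (h, port) for h in zk_hosts)
    return "zookeeper.connect=%s%s" % (hosts, chroot)

# Hypothetical IPs for the machines running the Zookeeper role:
print(render_zookeeper_connect(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
```

In a real deployment the same join would live inside the server.properties.erb template rather than in a script.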
>
> On Wed, May 6, 2015 at 1:10 AM Jeff Maass <[email protected]> wrote:
>
>> Care to share any of your Puppet recipes?
>>
>> From: Daniel Compton <[email protected]>
>> Reply-To: "[email protected]" <[email protected]>
>> Date: Monday, May 4, 2015 at 20:14
>>
>> To: "[email protected]" <[email protected]>
>> Subject: Re: Use Load Balancer instead of zookeeper IPs
>>
>>   We used Puppet to generate the Kafka config list, so if there was a
>> change to the Zookeeper server inventory, we could regenerate the Kafka
>> config automatically, and then do a rolling restart to pick up the new
>> Zookeeper servers.
>>
>> On Tue, May 5, 2015 at 8:58 AM Dillian Murphey <[email protected]> wrote:
>>
>>> It seems to work, but we're just starting to use the system. As I get
>>> more developers experimenting, we'll see how it goes. There are still a
>>> lot of people out there not benefiting from AWS services like autoscaling
>>> groups and load balancers, so there isn't much documentation on using AWS
>>> for zookeeper clusters. I'm curious what other people are doing to
>>> dynamically change their config files when a zookeeper cluster needs to
>>> be scaled up. Are they just doing this all manually? Elastic
>>> infrastructure is the future, hence these questions.
>>>
>>> On Mon, May 4, 2015 at 6:55 AM, Andrew Medeiros <[email protected]> wrote:
>>>
>>>>  Brian,
>>>>
>>>>  I am in the same situation of not wanting to hardcode the
>>>> zookeeper IPs, since otherwise autoscaling is pretty much impossible.
>>>> Have you tried putting your zookeeper cluster behind an ELB yet? If so,
>>>> what were your results? Thank you!
>>>>
>>>>  Cheers,
>>>> Andrew Medeiros
>>>>
>>>> —
>>>> Sent from Mailbox <https://www.dropbox.com/mailbox>
>>>>
>>>>
>>>>  On Thu, Apr 30, 2015 at 11:32 AM, Brian Fleming <[email protected]> wrote:
>>>>
>>>>>  I'm working in AWS.
>>>>>
>>>>>  I have a load balancer in front of my zookeeper cluster.
>>>>>
>>>>>  Can I use the load balancer DNS alias instead of entering the
>>>>> individual IPs in the config?  That way I don't need to constantly
>>>>> update the config when the IP addresses change.
>>>>>
>>>>>  I guess if every operation to zookeeper is transactional, it should
>>>>> be fine, right?
>>>>>
>>>>>  Thanks
>>>>>
>>>>
>>>>
>>>
