We have also created simple wrapper scripts for common operations.
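As an illustration, a minimal sketch of such a wrapper is below. Everything here is hypothetical: the `ktopics` name, the `KTOPICS_CONFIG` variable, the `~/.kafka-clusters` path, and the cluster/host names are made up, and for simplicity the config is a plain-text "<cluster> <zookeeper>" pair per line rather than yaml. The function echoes the command it would run; drop the `echo` to execute it for real.

```shell
# ktopics: hypothetical wrapper that fills in --zookeeper for
# kafka-topics.sh from a per-cluster config file.
# Config format (one cluster per line): <cluster-name> <zookeeper-endpoint>
KTOPICS_CONFIG="${KTOPICS_CONFIG:-$HOME/.kafka-clusters}"

ktopics() {
    if [ "$1" != "--cluster" ] || [ -z "$2" ]; then
        echo "usage: ktopics --cluster <name> [kafka-topics options...]" >&2
        return 1
    fi
    cluster=$2
    shift 2

    # Look up the zookeeper endpoint for the requested cluster.
    zk=$(awk -v c="$cluster" '$1 == c { print $2 }' "$KTOPICS_CONFIG")
    if [ -z "$zk" ]; then
        echo "unknown cluster: $cluster" >&2
        return 1
    fi

    # echo prints the command that would run; remove it to execute.
    echo bin/kafka-topics.sh --zookeeper "$zk" "$@"
}
```

With that in place, `ktopics --cluster my_cluster --list` expands to the full kafka-topics.sh invocation without the caller having to remember which endpoint flag the command takes; the same pattern works for wrappers that supply --broker-list or --bootstrap-server instead.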

On Sat, Apr 21, 2018 at 2:20 AM, Peter Bukowinski <pmb...@gmail.com> wrote:

> One solution is to build wrapper scripts around the standard kafka
> scripts. You’d put your relevant cluster parameters (brokers, zookeepers)
> in a single config file (I like yaml), then your script would import that
> config file and pass the appropriate parameters to the kafka command. You
> could call the wrapper scripts by passing the name of the cluster as an
> argument and then passing the standard kafka options, e.g.
>
> ktopics --cluster my_cluster --list
>
>
> -- Peter Bukowinski
>
> > On Apr 20, 2018, at 3:23 AM, Horváth Péter Gergely <
> > horvath.peter.gerg...@gmail.com> wrote:
> >
> > Hello All,
> >
> > I am wondering if there is any way to avoid having to enter the host URLs
> > for each Kafka CLI command you execute.
> >
> > This is kind of tedious, as different CLI commands require specifying
> > different servers (--broker-list, --bootstrap-server and --zookeeper),
> > which is especially painful if the host names are long and only slightly
> > different (e.g. the AWS naming scheme:
> > ec2-12-34-56-2.region-x.compute.amazonaws.com).
> >
> > I know I could simply export shell variables for each type of endpoint
> > and refer to them in the command, but that only eases the pain:
> > export KAFKA_ZK=ec2-12-34-56-2.region-x.compute.amazonaws.com
> > bin/kafka-topics.sh --list --zookeeper ${KAFKA_ZK}
> >
> > Is there by any chance a better way of doing this I am not aware of?
> > Technically I am looking for some solution where I don't have to remember
> > that a Kafka CLI command expects --broker-list, --bootstrap-server or
> > --zookeeper, but can specify these settings once.
> >
> > Thanks,
> > Peter
>
