We had success running a 3-node NiFi cluster in Kubernetes using modified
configuration scripts from the AlexsJones GitHub repo -
https://github.com/AlexsJones/nifi
Ours runs on an internal bare-metal k8s lab configuration, not in a public
cloud at this time, but the basics are the same either way.

- set up NiFi as a StatefulSet so you can scale up or down as needed. When
a pod fails, k8s will spawn a replacement, and ZooKeeper will handle
election of the cluster coordinator during the transition (a trimmed-down
manifest sketch follows this list).
- manage your certs (keystores/truststores) as k8s Secrets.
- you also need a StatefulSet of ZooKeeper pods to coordinate the NiFi
nodes (see the nifi.properties sketch below for how the two get wired
together).
- use persistent volume mounts to hold the flowfile, database, content, and
provenance repository directories.
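
For reference, here is roughly what the NiFi side can look like. This is a
trimmed-down sketch, not the exact manifest from the repo above - the image
tag, names, and storage sizes are placeholders you'd adapt:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nifi
    spec:
      serviceName: nifi-headless      # headless Service -> stable per-pod DNS
      replicas: 3
      selector:
        matchLabels:
          app: nifi
      template:
        metadata:
          labels:
            app: nifi
        spec:
          containers:
          - name: nifi
            image: apache/nifi:1.9.2  # pin whatever version you actually run
            ports:
            - containerPort: 8443     # https UI
            volumeMounts:
            - name: flowfile-repo
              mountPath: /opt/nifi/nifi-current/flowfile_repository
            - name: certs             # keystore/truststore from the Secret below
              mountPath: /opt/certs
              readOnly: true
            # ...content, provenance, and database repo mounts look the same
          volumes:
          - name: certs
            secret:
              secretName: nifi-certs  # holds your keystore.jks / truststore.jks
      volumeClaimTemplates:           # one PVC per repo per pod; a replacement
      - metadata:                     # pod re-attaches the same volume
          name: flowfile-repo
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 10Gi

You'll also want a headless Service (clusterIP: None) named nifi-headless so
each pod gets a stable DNS name like nifi-0.nifi-headless.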
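
The cluster wiring itself is mostly a handful of nifi.properties entries
pointing the nodes at the ZooKeeper ensemble. Another sketch, shown here as a
ConfigMap - the zk-headless service name is again a placeholder, and the
per-node address has to be templated in per pod by a startup script:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nifi-cluster-config
    data:
      nifi.properties: |
        # use the external ZooKeeper ensemble, not the embedded one
        nifi.state.management.embedded.zookeeper.start=false
        nifi.zookeeper.connect.string=zk-0.zk-headless:2181,zk-1.zk-headless:2181,zk-2.zk-headless:2181
        nifi.cluster.is.node=true
        # each pod needs its own stable DNS name here, e.g.
        # nifi-0.nifi-headless; a startup script typically fills this in
        nifi.cluster.node.address=
        nifi.cluster.node.protocol.port=11443
        # keystore/truststore mounted from the nifi-certs Secret
        nifi.security.keystore=/opt/certs/keystore.jks
        nifi.security.truststore=/opt/certs/truststore.jks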

On Mon, Oct 21, 2019 at 11:21 AM Joe Gresock <[email protected]> wrote:

> Apologies if this has been answered on the list already..
>
> Does anyone have knowledge of the latest in the realm of nifi kubernetes
> support?  I see some pages like https://hub.helm.sh/charts/cetic/nifi,
> and https://github.com/AlexsJones/nifi but am unsure which example to
> pick to start with.
>
> I'm curious how well kubernetes maintains the nifi cluster state with pod
> failures.  I.e., do any of the k8s implementations play well with the nifi
> cluster list so that we don't have dangling downed nodes in the cluster?
> Also, I'm wondering how certs are managed in a secured cluster.
>
> Appreciate any nudge in the right direction,
> Joe
>
