Hi Andre,

You are correct that DistributedMapCacheServer is a single point of failure.
The name is a little misleading: it is "distributed" in the sense that it
allows the nodes in a cluster to share information, but there is only one
instance of the server.

The good news is that all of the processors that maintain state, such as
the ListSFTP/FetchSFTP processors, now use the internal state management
API, which stores cluster-wide state in ZooKeeper, and you would typically
run 3 ZooKeeper instances.
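For reference, the cluster state provider is configured in conf/state-management.xml. A minimal ZooKeeper provider entry looks roughly like this (the connect string, root node, and timeout values below are illustrative, not recommendations):

```xml
<stateManagement>
  <cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- Illustrative 3-node ZooKeeper ensemble -->
    <property name="Connect String">zk1:2181,zk2:2181,zk3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
  </cluster-provider>
</stateManagement>
```

nifi.properties then points nifi.state.management.provider.cluster at the provider id above.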

The reason these processors still have properties for the distributed
cache is that they attempt to automatically migrate any state previously
stored in the distributed cache over to ZooKeeper, so that someone
upgrading from a version before state management was introduced won't
lose their old state.
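The migration check is conceptually something like the following sketch, using NiFi's real StateManager and DistributedMapCacheClient interfaces in simplified form (fetchLegacyState is a hypothetical helper, and error handling is omitted):

```java
// Sketch: one-time migration of legacy distributed-cache state into the
// state manager. A StateMap version of -1 means no state has ever been
// stored, so the processor falls back to the old cache if one is configured.
final StateMap current = context.getStateManager().getState(Scope.CLUSTER);
if (current.getVersion() == -1 && cacheClient != null) {
    // No state in ZooKeeper yet; look for state in the legacy cache.
    final Map<String, String> legacy = fetchLegacyState(cacheClient); // hypothetical helper
    if (legacy != null) {
        context.getStateManager().setState(legacy, Scope.CLUSTER);
    }
}
```

After a successful migration the distributed cache is no longer consulted, which is why the properties are effectively vestigial on a fresh install.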

At some point (maybe for 1.0?) it would be really nice to remove all those
old properties and basically say that if you are upgrading from a version
before state management was introduced, you should first upgrade to 0.7 (or
whatever the latest 0.x release is) and then go from there to 1.0; unless
you don't care about preserving state, in which case you can go straight
to 1.0.

-Bryan



On Wed, May 25, 2016 at 11:38 PM, Andre <[email protected]> wrote:

> Hi there,
>
> I have been playing with ListSFTP + FetchSFTP and just wanted to confirm
> the following:
>
> Let's say someone decides to run the above processors together with
> DistributedMapCacheServer in a cluster. Wouldn't this introduce a single
> point of failure to the cluster architecture?
>
> My understanding is that clustered NiFi nodes are able to continue running
> in case of the loss of an NCM node; however, reading about
> DistributedMapCacheServer I got the impression the CacheServer must
> always be up for the state to be synchronised across multiple nodes,
> meaning that if the server goes down, ListSFTP would either stop or run
> out of sync.
>
> Is this understanding correct?
>
> I thank you in advance
>
>
>
