Thanks Michal, I only need to keep a small amount of config data, so
ZooKeeper should be sufficient for me.😁
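For the record, keeping a small config blob in ZooKeeper directly is straightforward. Below is a minimal sketch in the spirit of that plan, using the kazoo Python client's API (`ensure_path`/`set`/`get`); the znode path, config keys, and host list are made up for illustration, and a tiny in-memory stub stands in for the real client so the sketch runs without a live ZooKeeper:

```python
import json

class FakeZk:
    """In-memory stand-in for kazoo.client.KazooClient (illustration only)."""
    def __init__(self):
        self.nodes = {}

    def ensure_path(self, path):
        self.nodes.setdefault(path, b"")

    def set(self, path, value):
        self.nodes[path] = value

    def get(self, path):
        # kazoo returns a (data, ZnodeStat) tuple; we fake the stat
        return self.nodes[path], None

def save_config(zk, path, config):
    """Serialize a small config dict into a znode."""
    zk.ensure_path(path)
    zk.set(path, json.dumps(config).encode("utf-8"))

def load_config(zk, path):
    """Read the config dict back; any processor on any node can do this."""
    data, _stat = zk.get(path)
    return json.loads(data.decode("utf-8"))

# Against a real cluster you would instead do something like:
#   from kazoo.client import KazooClient
#   zk = KazooClient(hosts="nodea:2181,nodeb:2181,nodec:2181")
#   zk.start()
zk = FakeZk()
save_config(zk, "/myapp/config", {"batch_size": 100})
print(load_config(zk, "/myapp/config"))
```

ZooKeeper znodes are limited to about 1 MB by default, which is fine for "very few config data" but is another reason not to treat it as a general-purpose cache.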

2017-05-26 15:12 GMT+08:00 Michal Klempa <[email protected]>:

> Hi Ben,
> in my experience, I would recommend not using
> DistributedMapCache at all.
> First of all, it's not load balanced / HA, and it is just a
> HashMap implementation in Java, talking to clients over plain TCP
> (i.e. it consumes RAM from NiFi's Java heap space).
> I guess it's in NiFi as a safety net for simple use cases.
> For any production / heavy-traffic use case, I would go for an
> external cache / database (Redis?).
> ZooKeeper is good for any HA / balanced use case that does not have
> heavy traffic, so if you have it installed and you know how to do it
> in ZooKeeper, go for it.
>
> Regards,
> Michal
>
> On Fri, May 26, 2017 at 2:19 AM, 尹文才 <[email protected]> wrote:
> > Thanks Joe, actually what I want to achieve is that one of my processors
> > needs to write some config data to be kept in the cluster, so that other
> > processors can easily get the config data from the cluster.
> > I've considered using the default NiFi state map to do it, but as far as
> > I know it can only keep data for a specific processor, and the data can
> > only be retrieved by that processor. I've never used Hazelcast, so I may
> > just keep the config data in ZooKeeper directly.
> >
> > Regards,
> > ben
> >
> > 2017-05-26 8:55 GMT+08:00 Joe Witt <[email protected]>:
> >>
> >> So the client would be on each of nodea, nodeb, and nodec.  The server
> >> would be on nodea, nodeb, nodec.  Each client would be configured to
> >> talk to any one of the three servers nodea, nodeb, or nodec.  It does
> >> not offer HA.  For more complete behavior it is a good idea to have a
> >> client/service implementation that talks to a full caching service.
> >> In the next release you can script out a controller service as a cache
> >> client.  I believe one of the folks in the community has an example
> >> floating around on how to do this to talk to Hazelcast.
> >>
> >> Thanks
> >> Joe
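The topology Joe describes above (every node can run the server service, but all clients point at one node, with no HA) can be sketched as a toy single-server TCP key/value cache. This is not NiFi's actual wire protocol, just an illustration of the pattern and of why the chosen server node is a single point of failure:

```python
import socket
import socketserver
import threading

# Server-side state: a plain in-memory map, like DistributedMapCacheServer
# conceptually holds a HashMap in the JVM heap of ONE node.
CACHE = {}

class CacheHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One whitespace-separated request per line: "PUT key value" or "GET key"
        for line in self.rfile:
            parts = line.decode().split()
            if parts[0] == "PUT":
                CACHE[parts[1]] = parts[2]
                self.wfile.write(b"OK\n")
            elif parts[0] == "GET":
                self.wfile.write((CACHE.get(parts[1], "") + "\n").encode())

def start_server():
    # Port 0 lets the OS pick a free port; server_address reports it
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), CacheHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def request(addr, line):
    """A 'client service': every node would call this with the SAME addr."""
    with socket.create_connection(addr) as s:
        s.sendall((line + "\n").encode())
        return s.makefile().readline().strip()

server = start_server()
addr = server.server_address
print(request(addr, "PUT batch_size 100"))  # -> OK
print(request(addr, "GET batch_size"))      # -> 100
```

If the process holding `CACHE` dies, every client loses the cache at once; nothing fails over, which mirrors the "does not offer HA" caveat above.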
> >>
> >> On Thu, May 25, 2017 at 8:45 PM, 尹文才 <[email protected]> wrote:
> >> > Thanks Joe, so I need to set up the DistributedMapCacheServer on all
> >> > nodes; do you mean every DistributedMapCacheClientService should
> >> > reference only one of the servers (the same one on the same node)?
> >> > What if the DistributedMapCacheServer on that node goes down (or the
> >> > node itself goes down)? Does it support HA?
> >> >
> >> >
> >> > 2017-05-25 20:46 GMT+08:00 Joe Witt <[email protected]>:
> >> >>
> >> >> Hello
> >> >>
> >> >> You put the DistributedMapCacheServer controller service on as well
> >> >> and then point at it from the client services.  So in your three
> >> >> nodes, all three will have the server service, but all the clients
> >> >> will point to the server service on only one of the nodes.
> >> >>
> >> >> Thanks
> >> >> Joe
> >> >>
> >> >> On Thu, May 25, 2017 at 5:29 AM, 尹文才 <[email protected]> wrote:
> >> >> > Hi guys, I'm currently using NiFi with 3 nodes as a cluster. When
> >> >> > using the DistributedMapCacheClientService, there's a configuration
> >> >> > property called 'Server Hostname'. I've tried it with localhost on
> >> >> > my local standalone NiFi node and it did work. My question is: what
> >> >> > should I set inside the NiFi cluster?
> >> >> > Thanks.
> >> >
> >> >
> >
> >
>
