Hello.

I have to design an architecture based on NiFi to collect and route data from sources
to a Hadoop/Elasticsearch cluster.


Sources will have different constraints (from 50 msg/s to hundreds of thousands,
different latency requirements, different protocols, etc.).


I wonder whether we should dedicate a processor to each data source (e.g. one port
per data source), or whether we can use one entry processor per protocol and route
on some attribute afterwards.
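For illustration, the per-protocol option I have in mind would look roughly like this (a sketch only; the `source.id` attribute name and the routing property names are my own placeholders, using RouteOnAttribute with Expression Language):

```
ListenTCP (one listener per protocol)
  └─> UpdateAttribute            # tag each flowfile, e.g. set source.id
        └─> RouteOnAttribute     # one dynamic property per source:
              source_a : ${source.id:equals('a')}
              source_b : ${source.id:equals('b')}
              (unmatched → dead-letter / review queue)
```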


If we use a single entry processor per protocol for all data sources, won't there
be a risk on the shared queue, for example if one source causes a data storm?
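As I understand it, each connection carries its own back-pressure thresholds, so part of the question is whether tuning these per queue is enough to isolate a noisy source (the values below are guesses, not recommendations; the setting names are the standard NiFi connection settings):

```
Connection settings (per queue):
  Back Pressure Object Threshold    : 10000    # flowfiles
  Back Pressure Data Size Threshold : 1 GB
```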


Thanks for any pointer / answer.


Aurélien.
