Are there any guidelines on how to scale NiFi up or down? (I know we don't do autoscaling at present, and nodes are independent of each other.)
The use case is: 16,000 text files (CSV, XML, JSON) per minute, totalling 150 GB, are delivered to a combination of sources such as FTP, S3, and the local filesystem. These files are then ingested, with some light processing, into an HDFS cluster. My question is: are there any best practices, guidelines, or ideas on setting up a NiFi cluster for this kind of volume and throughput?
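As a rough back-of-the-envelope starting point (treating the 150 as gigabytes arriving per minute, and the 100 MB/s sustained per-node throughput purely as an assumed placeholder, not a measured NiFi figure), the math looks something like this:

# Hypothetical sizing sketch; NODE_THROUGHPUT_MB_S is an assumption, not a benchmark.
FILES_PER_MIN = 16_000            # incoming files per minute
VOLUME_GB_PER_MIN = 150           # total data arriving per minute
NODE_THROUGHPUT_MB_S = 100        # assumed sustained MB/s one node can push to HDFS

aggregate_mb_s = VOLUME_GB_PER_MIN * 1024 / 60                  # ~2560 MB/s aggregate
files_per_s = FILES_PER_MIN / 60                                # ~267 files/s
nodes_needed = -(-int(aggregate_mb_s) // NODE_THROUGHPUT_MB_S)  # ceiling division

print(f"aggregate throughput: {aggregate_mb_s:.0f} MB/s")
print(f"files per second:     {files_per_s:.0f}")
print(f"nodes at {NODE_THROUGHPUT_MB_S} MB/s each: {nodes_needed}")

So even before accounting for content-repository I/O or small-file overhead, those assumed numbers point at a cluster in the tens of nodes, which is the kind of sanity check I'm hoping the list can confirm or correct.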
