All,

I am currently running NiFi 1.2.0 on a Linux (RHEL) machine, as a single 
instance without any clustering. The machine has ~800GB of RAM and 2.5 TB of 
disk space (SSDs with RAID 5). I have set my Java heap values in the 
"bootstrap.conf" file as below:

# JVM memory settings
java.arg.2=-Xms40960m
java.arg.3=-Xmx81920m

# Some custom Configurations
java.arg.7=-XX:ReservedCodeCacheSize=1024m
java.arg.8=-XX:CodeCacheMinimumFreeSpace=10m
java.arg.9=-XX:+UseCodeCacheFlushing

Now, the problem I am facing when stress testing this instance: whenever the 
read/write of the data feeds reaches about 5GB (at least that's what I 
observed), the whole instance runs super slow, meaning the FlowFiles move very 
slowly through the queues. It heavily affects the other process groups as well, 
which are very simple flows. I tried to read the system diagnostics at that 
point and saw that all usage was below 20%, including heap usage and FlowFile 
and content repository usage. I also captured the status history of the process 
group at that particular point; below are some results.


[image: status history screenshot 1]

[image: status history screenshot 2]

From the above images it is obvious that the process group is doing a lot of 
I/O at that point. Is there a way to increase the throughput of the instance, 
given that my requirement involves tons of reads/writes every hour? Also, to 
add, all my repositories (FlowFile, content and provenance) are on the same 
disk. I tried to increase all the memory settings I possibly can in both 
bootstrap.conf and nifi.properties, but it made no difference; the whole 
instance still runs very slowly and processes a minimal number of FlowFiles. 
Just to make sure, I created a GenerateFlowFile processor while the system was 
slow and, to my surprise, the rate of flow files generated was less than one 
per minute (which should fill the queue in less than 5 secs under normal 
circumstances). Any help on this would be much appreciated.
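
In case it is relevant, these are the repository location properties in 
nifi.properties that I believe would need to change if I were to split the 
repositories across separate disks (the paths below are only illustrative 
examples, not my actual layout):

# Repository locations (example paths only)
nifi.flowfile.repository.directory=/disk1/flowfile_repository
nifi.content.repository.directory.default=/disk2/content_repository
nifi.provenance.repository.directory.default=/disk3/provenance_repository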


Thanks
Karthik