I am updating and adding a few fields in the CSV, hence I used UpdateRecord.

Thanks & Regards,
Prashanth

From: Mark Payne [mailto:marka...@hotmail.com]
Sent: Wednesday, June 13, 2018 10:49 PM
To: users@nifi.apache.org
Subject: Re: NiFi Performance Analysis Clarification

Prashanth,

Also of note, are you actually updating any fields in the CSV that you receive 
with UpdateRecord / your custom processor?
Or are you just using that to convert the CSV to Avro? If the latter, you can 
actually just remove this processor from your flow
entirely and simply use the PublishKafkaRecord processor with a CSV Reader and 
an Avro Writer.
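
For illustration, a minimal sketch of that record-based setup (the processor and 
property names here are indicative and should be checked against your NiFi 
version):

    PublishKafkaRecord_1_0
        Kafka Brokers  : <your broker list>
        Topic Name     : <your topic>
        Record Reader  : CSVReader (configured with your CSV schema)
        Record Writer  : AvroRecordSetWriter (same schema)

The reader/writer pair then performs the CSV-to-Avro conversion as part of 
publishing, so no separate conversion step is needed.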

Thanks
-Mark



On Jun 13, 2018, at 12:56 PM, V, Prashanth (Nokia - IN/Bangalore) 
<prashant...@nokia.com> wrote:

Joe,
Thanks for the reply.  Please find the answers inline.

Thanks & Regards,
Prashanth

-----Original Message-----
From: Joe Witt [mailto:joe.w...@gmail.com]
Sent: Wednesday, June 13, 2018 6:04 PM
To: users@nifi.apache.org
Subject: Re: NiFi Performance Analysis Clarification

Prashanth

I strongly recommend you reduce your JVM heap size for NiFi to 2 or 4 GB and no 
more than 8 GB.  The flow, well configured, will certainly not need anywhere 
near that much, and the more RAM you give it the more work GC has to do (some 
GCs are different and can be tuned, etc., but that is for another day).  (I used 
4 GB of buffer memory for Kafka, so I think I will retry with my heap memory 
reduced to 8 GB.)
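
(For reference, a minimal sketch of where the NiFi heap is set, in 
conf/bootstrap.conf. The values below are illustrative only, not a tuned 
recommendation for this flow, and the argument numbering assumes the default 
bootstrap.conf layout:

    # conf/bootstrap.conf - JVM heap for NiFi
    java.arg.2=-Xms4g
    java.arg.3=-Xmx4g

Both values are usually kept equal so the heap does not resize at runtime.)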

You are absolutely right that the log entries you showed are really problematic 
and that performance in NiFi can be dramatically improved.

The flow I think you're describing is:
- ListSFTP  (tried with GetSFTP also)
- FetchSFTP
- Convert to Avro (tried with UpdateRecord; also created a custom processor)
- Publish to Kafka

First, we should look at nifi.properties (I didn't update much there)

Second, we should focus on the processors employed

Third, we should look at the configuration of those processors (I have a 
12-core machine. I give 4 threads to GetSFTP, 12 threads to UpdateRecord, and 
8 threads to PublishKafka.)

Sounds like you only turned off flow archival and changed the heap.  Any other 
settings changes?  I'd recommend putting archival back on, as it can allow NiFi 
to remove data more efficiently. (I thought that was slowing things down. I will 
enable it and try.)
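
(For reference, the archival settings live in nifi.properties. The property 
names below are the usual ones, but verify them against your 1.6.0 install; the 
retention and usage values shown are the shipped defaults, not tuned 
recommendations.

    nifi.content.repository.archive.enabled=true
    nifi.content.repository.archive.max.retention.period=12 hours
    nifi.content.repository.archive.max.usage.percentage=50%

With archiving enabled, content claims are aged out by the retention period and 
usage cap rather than being deleted immediately when no longer referenced.)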

Please list precisely which processors you have and how they're connected.  
Sharing a flow template would be extremely helpful. (I think I answered this in 
the questions above.)

Thanks

On Wed, Jun 13, 2018 at 8:13 AM, Jeremy Dyer 
<jdy...@gmail.com> wrote:
> Prashanth - just out of curiosity, could you share the average size of
> those Avro files you are pushing to Kafka? It would be nice to know
> for some other benchmark tests I am doing.
>
> Thanks,
> Jeremy Dyer
>
> ________________________________
> From: V, Prashanth (Nokia - IN/Bangalore) 
> <prashant...@nokia.com>
> Sent: Wednesday, June 13, 2018 8:10:27 AM
> To: Mike Thomsen
> Cc: users@nifi.apache.org; pierre.villard...@gmail.com
> Subject: RE: NiFi Performance Analysis Clarification
>
>
> Hi Mike,
>
>
>
> Thanks for the reply. Actually, we did all those optimisations with Kafka.
> I am converting to Avro, and I also configured the Kafka producer properties
> accordingly. I believe Kafka is not the bottleneck.
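>
> (For reference, the producer-side throughput settings referred to here are
> typically along these lines; the values below are placeholders for
> illustration, not the ones actually used in this flow:
>
>     batch.size=262144
>     linger.ms=50
>     compression.type=snappy
>     buffer.memory=4294967296
>
> In NiFi these can be supplied as dynamic properties on the PublishKafka /
> PublishKafkaRecord processors, which pass them through to the Kafka client.)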
>
> I am sure because I can see pretty good throughput with my flow, but the
> average throughput drops because stop-the-world pauses are happening for a
> long time. Correct me if I am wrong.
>
>
>
> Thanks & Regards,
>
> Prashanth
>
>
>
> From: Mike Thomsen [mailto:mikerthom...@gmail.com]
> Sent: Wednesday, June 13, 2018 4:23 PM
> To: V, Prashanth (Nokia - IN/Bangalore) 
> <prashant...@nokia.com>
> Cc: users@nifi.apache.org; pierre.villard...@gmail.com
> Subject: Re: NiFi Performance Analysis Clarification
>
>
>
> Relevant:
> http://www.idata.co.il/2016/09/moving-binary-data-with-kafka/
>
>
>
> If you're throwing 1MB and bigger files at Kafka, that's probably
> where your slowdown is occurring, particularly if you're running a
> single node or just two nodes. Kafka was designed to process extremely
> high volumes of small messages (at most tens of KB, not MB and
> certainly not GB). What you can try is building an Avro schema for
> your CSV files and using PublishKafkaRecord to break everything down into 
> records that are an appropriate fit for Kafka.
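>
> (For illustration, a minimal Avro schema sketch for such a CSV; the field
> names here are hypothetical placeholders, not the actual columns in this flow:
>
>     {
>       "type": "record",
>       "name": "CsvRow",
>       "fields": [
>         { "name": "id", "type": "string" },
>         { "name": "event_time", "type": "long" },
>         { "name": "value", "type": "double" }
>       ]
>     }
>
> With a record-oriented publisher, each CSV row is then sent as one small
> Kafka record of this shape rather than one multi-megabyte message per file.)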
>
>
>
> On Wed, Jun 13, 2018 at 6:38 AM V, Prashanth (Nokia - IN/Bangalore)
> <prashant...@nokia.com> wrote:
>
> Please find answers inline
>
>
>
> Thanks & Regards,
>
> Prashanth
>
>
>
> From: Pierre Villard [mailto:pierre.villard...@gmail.com]
> Sent: Wednesday, June 13, 2018 3:56 PM
>
>
> To: users@nifi.apache.org
> Subject: Re: NiFi Performance Analysis Clarification
>
>
>
> Hi,
>
>
>
> What's the version of NiFi you're using?  1.6.0
>
> What are the file systems you're using for the repositories? Local
> RHEL file system (/home dir)
>
>
>
> I think that changing the heap won't make any difference in this case.
> I'd keep it to something like 8GB (unless you're doing very specific
> stuff that is memory-consuming) and let the rest go to the OS and disk caching.
>
> I think NiFi holds the snapshot map in memory, and since we are dealing
> with pretty huge ingress data [I allocated 32GB out of 42GB to NiFi],
> I increased it. Does this have anything to do with the FlowFile checkpoint
> delay?
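>
> (For reference, the FlowFile repository checkpoint cadence is governed by a
> nifi.properties entry; the value shown is the usual default and matches the
> roughly two-minute pattern in the logs, but verify it against your 1.6.0
> install:
>
>     nifi.flowfile.repository.checkpoint.interval=2 mins
>
> The heap size does not control how often this checkpoint runs.)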
>
>
>
> Pierre
>
>
>
> 2018-06-13 11:58 GMT+02:00 V, Prashanth (Nokia - IN/Bangalore)
> <prashant...@nokia.com>:
>
> Hi Mike,
>
>
>
> I am retrieving many small CSV files, each around 1MB in size (total folder
> size around ~100GB). In the update step, I am doing some enrichment on the
> ingress CSV. Anyway, my flow itself doesn't have anything to do with the
> stop-the-world time, right?
>
>
>
> Can you please tell me about FlowFile checkpointing-related tunings?
>
>
>
> Thanks & Regards,
>
> Prashanth
>
>
>
> From: Mike Thomsen [mailto:mikerthom...@gmail.com]
> Sent: Wednesday, June 13, 2018 2:33 PM
> To: users@nifi.apache.org
> Subject: Re: NiFi Performance Analysis Clarification
>
>
>
> What are you retrieving (particularly size) and what happens in the "update"
> step?
>
>
>
> Thanks,
>
>
>
> Mike
>
>
>
> On Wed, Jun 13, 2018 at 4:10 AM V, Prashanth (Nokia - IN/Bangalore)
> <prashant...@nokia.com> wrote:
>
> Hi Team,
>
>
>
> I am doing some performance testing in NiFi. The workflow is GetSFTP ->
> update -> PutKafka. I want to tune my setup to achieve high throughput
> without much queuing.
>
> But my average throughput drops during the FlowFile checkpointing
> period. I believe a stop-the-world pause is happening during that time.
>
>
>
> I can read roughly ~100MB/s from SFTP and send almost the same to Kafka.
> But every 2 mins it stops execution completely. See the logs below:
>
>
>
> 2018-06-13 13:24:21,160 INFO [pool-10-thread-1]
> o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of
> FlowFile Repository
>
> 2018-06-13 13:24:49,420 INFO [Write-Ahead Local State Provider
> Maintenance] org.wali.MinimalLockingWriteAheadLog
> org.wali.MinimalLockingWriteAheadLog@cf82c58
>  checkpointed with 23
> Records and 0 Swap Files in 39353 milliseconds (Stop-the-world time =
> 3 milliseconds, Clear Edit Logs time = 3 millis), max Transaction ID
> 68
>
> 2018-06-13 13:25:00,165 INFO [pool-10-thread-1]
> o.a.n.wali.SequentialAccessWriteAheadLog Checkpointed Write-Ahead Log
> with 7 Records and 0 Swap Files in 39002 milliseconds (Stop-the-world
> time = 28275 milliseconds), max Transaction ID 316705
>
> 2018-06-13 13:25:00,169 INFO [pool-10-thread-1]
> o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed
> FlowFile Repository with 7 records in 39008 milliseconds
>
>
>
> I think all processors go into an idle state for 39 seconds ☹. Please
> guide me on how to tune it.
>
> I changed the heap memory to 32G [I am testing on a 12-core, 48G
> machine]. I disabled content-repository archiving. All other properties
> remain the same.
>
>
>
> Thanks & Regards,
>
> Prashanth
>
>
