Hi All,

We have a use case where we receive huge JSON files (size may vary from 1 GB to
50 GB) via HTTP, convert them to XML (the XML format is not fixed; any other
format is fine), and send them out using Kafka. The restriction is that CPU and
RAM usage, once configured, must not grow with the incoming file size; the same
configuration should handle files of any size.
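
To make the constant-memory requirement concrete, here is a rough sketch of
the kind of streaming conversion we have in mind (plain Java with Jackson's
streaming parser and StAX, outside NiFi; the "root"/"item" element names are
placeholders, and it assumes JSON field names are valid XML element names):

    import com.fasterxml.jackson.core.JsonFactory;
    import com.fasterxml.jackson.core.JsonParser;
    import com.fasterxml.jackson.core.JsonToken;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class StreamingJsonToXml {
        // Converts JSON to XML token by token; memory use is bounded by
        // the nesting depth of the JSON, not by the size of the input.
        public static void convert(InputStream json, OutputStream xml) throws Exception {
            JsonParser p = new JsonFactory().createParser(json);
            XMLStreamWriter w = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(xml, "UTF-8");
            w.writeStartDocument();
            w.writeStartElement("root");      // wrapper element; name is a placeholder
            String name = "item";             // default element name for array entries
            for (JsonToken t = p.nextToken(); t != null; t = p.nextToken()) {
                switch (t) {
                    case FIELD_NAME:
                        name = p.getCurrentName();
                        break;
                    case START_OBJECT:
                    case START_ARRAY:
                        w.writeStartElement(name);
                        break;
                    case END_OBJECT:
                    case END_ARRAY:
                        w.writeEndElement();
                        break;
                    default:                  // scalar value (string, number, boolean, null)
                        w.writeStartElement(name);
                        w.writeCharacters(p.getText());
                        w.writeEndElement();
                }
            }
            w.writeEndElement();
            w.writeEndDocument();
            w.close();
            p.close();
        }
    }

With this pattern, a 1 GB and a 50 GB file need the same amount of memory.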

We used ListenHTTP --> SplitRecord --> PublishKafka, but we have observed that
SplitRecord sends data to PublishKafka only after the whole FlowFile has been
processed. Is there a reason it was designed this way? Would it not be better
to send splits to the next processor after each configured number of records,
instead of sending all splits in one shot? (See the sketch below for the
behaviour we were expecting.)
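
For comparison, the behaviour we were expecting is roughly the following
(plain kafka-clients Java, outside NiFi; the bootstrap servers, batch size,
and one-record-per-line framing are placeholders): records are pushed to
Kafka as they are read, so only one batch is ever held in memory regardless
of the input size.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.io.BufferedReader;
    import java.util.Properties;

    public class StreamingPublisher {
        public static void publish(BufferedReader records, String topic) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            final int batchSize = 1000;  // the "configured records" per flush
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                int count = 0;
                String record;
                while ((record = records.readLine()) != null) {  // one record per line
                    producer.send(new ProducerRecord<>(topic, record));
                    if (++count % batchSize == 0) {
                        producer.flush();  // push this batch out before reading more
                    }
                }
                producer.flush();          // push out the final partial batch
            }
        }
    }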


Regards,
Hemantha
