exceptionfactory commented on PR #8691:
URL: https://github.com/apache/nifi/pull/8691#issuecomment-2135356741

   > > To the question about manipulating content as it is streamed in, 
`SplitText` and `SplitContent` are capable of reading large files and breaking 
them up into smaller FlowFiles.
   > 
   > Ah fair enough, so if I were to chain FetchFile and SplitContent to split 
a file into 100MB chunks, I should be able to start reading a 200TB file from 
disk and get some 100MB chunks back before the initial load of the 200TB file 
has fully completed?
   
   The exact behavior depends on the Processor in question, but that is possible with a custom implementation. The `ProcessSession` is transactional, so transferring 100 MB chunks to a relationship before the full 200 TB stream has been read would require committing the session in the middle of processing. Most Processors do not commit the session until processing is complete, but it depends on the desired behavior.
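   The general pattern being described, reading a large stream and emitting fixed-size chunks as soon as each one is available rather than after the whole input has been consumed, can be sketched outside of NiFi. The following is an illustrative Python sketch, not NiFi's Java API; the generator's `yield` plays the role of transferring a chunk FlowFile and committing the session so downstream processing can start immediately:

   ```python
   import io

   def iter_chunks(stream, chunk_size):
       """Yield successive chunks of up to chunk_size bytes as soon as
       each one is read, without waiting for the whole stream."""
       while True:
           chunk = stream.read(chunk_size)
           if not chunk:
               break
           yield chunk

   # Each chunk is produced before the rest of the stream has been read;
   # in NiFi terms, this is the point where a custom Processor would
   # transfer the chunk and commit the session mid-stream.
   source = io.BytesIO(b"x" * 250)  # stand-in for a very large file
   sizes = [len(c) for c in iter_chunks(source, chunk_size=100)]
   print(sizes)  # [100, 100, 50]
   ```

   A real NiFi Processor doing this would need to commit the session per chunk, which is exactly the departure from the usual commit-once-at-the-end behavior noted above.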
   

