Thanks for the quick reply. Yes, that is quite correct. The scenario is as follows:
The input flow is a "GetFile" processor that collects csv files (>100,000 lines). The flow queues each file and passes each line to a locally built processor (call it "MyImportProcessor") that submits it via the REST API to a Drupal website. The process works fine, but it is very slow, and I would like to speed it up by splitting the csv file into chunks so that the flow can then spawn "MyImportProcessor" as many times as required. (Rough sketches of both the chunking idea and the threshold-triggered scaling are below the quoted thread.)

On 06/04/2017 20:47, Jeff wrote:
> Hello Stephen,
>
> It's possible to watch the status of NiFi, and upon observing a
> particular status in which you're interested, you can use the REST API
> to create new processor groups. You'd also have to populate that
> processor group with processors and other components. Based on the
> scenario you mentioned, though, it sounds like you are looking at being
> able to scale up available processing (via more concurrent threads, or
> more nodes in a cluster) once a certain amount of data is queued up and
> waiting to be processed, rather than adding components to the existing
> flow. Is that correct?
>
> On Thu, Apr 6, 2017 at 3:30 PM Stephen-Talk
> <[email protected]> wrote:
>
> Hi, I am just a NiFi Inquisitor,
>
> Is it, or could it be possible, to dynamically spawn a "Processor Group"
> when the input flow reaches a certain threshold?
>
> Thanking you in anticipation.
> Stephen
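To make the chunking idea concrete, here is a rough Python sketch of the effect I am after, done outside NiFi for clarity: read the csv in blocks and submit each block in parallel. The Drupal endpoint, file name, chunk size and worker count are placeholders rather than my real configuration; inside NiFi I assume the same effect would come from splitting the file into smaller FlowFiles (e.g. with something like SplitText) and giving "MyImportProcessor" several concurrent tasks.

import csv
import itertools
from concurrent.futures import ThreadPoolExecutor

import requests

DRUPAL_URL = "https://example.org/entity/node"  # placeholder REST endpoint
CSV_FILE = "import.csv"                         # placeholder file name
CHUNK_SIZE = 5000                               # lines handed to each worker
WORKERS = 8                                     # parallel "MyImportProcessor" stand-ins

def chunks(reader, size):
    # Yield lists of up to `size` csv rows until the file is exhausted.
    while True:
        block = list(itertools.islice(reader, size))
        if not block:
            return
        yield block

def submit_chunk(rows):
    # One worker per chunk; each row goes to the REST API as its own request.
    with requests.Session() as session:
        for row in rows:
            session.post(DRUPAL_URL, json={"row": row}).raise_for_status()

with open(CSV_FILE, newline="") as handle:
    reader = csv.reader(handle)
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        # Each chunk is an independent unit of work, which is the same
        # effect as splitting the FlowFile and running several concurrent
        # tasks of the import processor at once.
        list(pool.map(submit_chunk, chunks(reader, CHUNK_SIZE)))

A Session per worker keeps connections reused within a chunk, which matters when every line becomes its own REST call; that is the kind of speed-up I am hoping to reproduce inside the flow.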

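And for the threshold idea in my original question, here is a very rough sketch of the kind of monitoring loop I imagine: poll a connection's queue depth over the REST API and raise the downstream processor's concurrent tasks once a threshold is crossed. The endpoint paths and JSON field names are taken from my reading of the NiFi REST API documentation and may differ between versions, so please treat them as assumptions to check against /nifi-api/docs on your own instance; the ids, threshold and task count are placeholders.

import time

import requests

# Assumes an unsecured NiFi instance; a secured one would also need a token.
NIFI = "http://localhost:8080/nifi-api"
CONNECTION_ID = "replace-with-connection-uuid"  # queue feeding MyImportProcessor
PROCESSOR_ID = "replace-with-processor-uuid"    # MyImportProcessor itself
THRESHOLD = 10000                               # queued FlowFiles that trigger scaling
MAX_TASKS = 8

def queued_flowfiles():
    # ConnectionStatusEntity -> connectionStatus -> aggregateSnapshot
    r = requests.get(f"{NIFI}/flow/connections/{CONNECTION_ID}/status")
    r.raise_for_status()
    return r.json()["connectionStatus"]["aggregateSnapshot"]["flowFilesQueued"]

def set_concurrent_tasks(tasks):
    # NiFi wants the latest revision on every update, so fetch the entity first.
    # Note that NiFi normally rejects config changes while the processor is
    # running, so in practice you would stop it via PUT /processors/{id}/run-status,
    # apply this change, then start it again.
    entity = requests.get(f"{NIFI}/processors/{PROCESSOR_ID}").json()
    update = {
        "revision": entity["revision"],
        "component": {
            "id": PROCESSOR_ID,
            "config": {"concurrentlySchedulableTaskCount": tasks},
        },
    }
    requests.put(f"{NIFI}/processors/{PROCESSOR_ID}", json=update).raise_for_status()

while True:
    depth = queued_flowfiles()
    if depth > THRESHOLD:
        print(f"{depth} FlowFiles queued, raising concurrent tasks to {MAX_TASKS}")
        set_concurrent_tasks(MAX_TASKS)
    time.sleep(30)

The same loop could presumably call the process-group creation endpoints Jeff mentioned instead of changing concurrent tasks; I have kept it to the simpler case here.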