More specifically, I mean wrap each of your 40 workflows in a process
group. I have a workflow that processes financial data, and at its most
extreme points it has three levels of nested process groups that group
common functions and isolate edge cases, so none of them are distracting
when you look at the data flow from a higher level while it's running.
It's about 100 processors total, but the canvas stays quite clean because
all of the functionality is neatly encapsulated in well-organized process
groups, which lets us do things like add new sources and then drop them
safely when they're no longer needed.
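If you want to script that layout rather than build it by hand, process
groups can also be created through NiFi's REST API
(POST /nifi-api/process-groups/{parentId}/process-groups). A minimal sketch
of the request body that endpoint expects is below; the group names are
illustrative, and you would still need to POST each payload to your own
NiFi instance with the real parent group id:

```python
import json

def make_pg_payload(name, x=0.0, y=0.0):
    """Build the JSON body NiFi's REST API expects when creating a
    process group. A newly created component starts at revision 0."""
    return {
        "revision": {"version": 0},
        "component": {"name": name, "position": {"x": x, "y": y}},
    }

# Illustrative nested layout: one group per source, plus a shared
# error-handling group. Each payload would be POSTed to
# /nifi-api/process-groups/{parentId}/process-groups.
payloads = [make_pg_payload(n) for n in ("Source A", "Source B", "Error Handling")]
print(json.dumps(payloads[0]))
```

The same endpoint works recursively, so the nested structure described
above is just repeated calls with the child group's id as the new parent.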

On Wed, Jul 24, 2019 at 3:39 PM Mike Thomsen <[email protected]> wrote:

> > In NiFi I can use ports, but then I need to connect those ports.
>
> You can wrap each operation in a process group and then connect the
> process groups via ports so your main canvas is substantially less
> cluttered. You can also nest process groups inside of each other; that
> works really well for organizing related functionality.
>
> On Mon, Jul 22, 2019 at 10:17 AM ski n <[email protected]> wrote:
>
>> I am working on migrating a large ESB process to a NiFi flow. This process
>> contains around 40 events (40 different flowfiles). On the ESB a loosely
>> coupled pattern was used with the help of JMS queues. In NiFi I can use
>> ports, but then I need to connect those ports. The canvas soon becomes
>> messy.
>>
>> Is there a way to use something like a ‘topic’ in NiFi? That is, some kind
>> of endpoint that doesn't require connecting items (processors/process
>> groups)? Or is this against the dataflow concept, so you always need an
>> external broker like Kafka or ActiveMQ for this?
>>
>> Another question is what to do with failure messages. Can you configure a
>> default ‘endpoint’ for all failures within a certain process? Now I
>> connect all processors to a failure-handling step/port, but this soon
>> gets messy as well. What is the best practice for errors? Do most use
>> auto-termination?
>>
>>
>> Regards,
>>
>>
>> Raymond
>>
>
