Brian,

This is a tough question to answer meaningfully because it is fairly open ended. You're right that flows created by different people will often have varying levels of optimization, efficiency, simplicity, and so on, much like written code.
One of the aspects that helps here is the ability to visualize and follow the logic of flows: messy or overly complicated flows tend to reveal themselves quite obviously on the canvas.

That said, the community has recognized the need for a more effective, centralized configuration management mechanism, and this is the impetus behind the Apache NiFi Registry subproject. This is where artifacts such as flow versions, extensions, and potentially other items can be centrally stored, versioned, and accessed from a variety of NiFi environments, which will also aid learning from and building upon the work of others. Today we offer templates, but they simply cannot be as useful as this registry concept will be.

Thanks
Joe

On Fri, Apr 14, 2017 at 1:56 PM, Kiran <[email protected]> wrote:
> Hello,
>
> The use of NiFi has steadily grown over the last few months and we have
> more people creating data flows. An issue that we have is that the quality
> of the data flows varies a lot. I was wondering if there are any best
> practices for creating data flows?
>
> I'm guessing this is an issue lots of other people have faced.
>
> I have found the following documentation:
> https://community.hortonworks.com/articles/9782/nifihdf-dataflow-optimization-part-1-of-2.html,
> but I was wondering if there was anything else?
>
> Thanks
>
> Brian
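For anyone following the thread: until the registry lands, one way to share and version flows today is to export a template's XML via the NiFi REST API and keep it in source control. A minimal sketch follows; the endpoint path is from the NiFi 1.x REST API, and the base URL and template ID are placeholders, not real values.

```python
# Sketch: download a saved template's XML from a NiFi 1.x instance so it
# can be checked into version control. Host and template ID are placeholders.
import urllib.request


def template_download_url(base_url: str, template_id: str) -> str:
    """Build the NiFi 1.x REST endpoint for downloading a template's XML."""
    return f"{base_url.rstrip('/')}/nifi-api/templates/{template_id}/download"


def fetch_template_xml(base_url: str, template_id: str) -> bytes:
    """Fetch the template XML (requires a running, reachable NiFi instance)."""
    with urllib.request.urlopen(template_download_url(base_url, template_id)) as resp:
        return resp.read()


if __name__ == "__main__":
    # Placeholder host and template ID for illustration only.
    print(template_download_url("http://localhost:8080", "example-template-id"))
```

This only covers getting the XML out; the registry will handle the versioning, diffing, and distribution that a file in Git can only approximate.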
