Hi Bryan,

Thanks for this. I don't remember seeing this setting, but I'm going to check it out.

-Ryan

On Thu, May 2, 2019 at 12:43 PM Bryan Bende <[email protected]> wrote:
> You can set nifi.flowcontroller.autoResumeState=false to start NiFi
> without running everything.
>
> On Thu, May 2, 2019 at 12:23 PM Ryan H
> <[email protected]> wrote:
> >
> > Hi All,
> >
> > We spin up multiple instances of NiFi for multiple users in a
> > containerized environment. A common issue that we run into is users
> > misconfiguring components in a way that crashes NiFi. The containers
> > will automatically try to restart themselves, but NiFi will never be
> > able to come back up because of the misconfiguration that caused the
> > issue in the first place: the improperly configured components were in
> > the "Running" state when the crash happened, and the same will be true
> > when the restart happens, so the instance is stuck in a crash loop.
> >
> > The options that we know of to handle this are:
> > 1. Delete the flow.xml.gz file and hope that they used Registry and
> > have a backup (not always the case), or pull in an archived flow.
> > 2. Set the state of everything in the flow.xml.gz file to "Stopped" or
> > "Disabled", which allows NiFi to come back up so the problematic
> > components can be reconfigured before trying again.
> >
> > We obviously expect our users to follow good development practices,
> > but that isn't always the case. Is there a better way to handle this?
> > Is there a flag of some kind that we can pass to NiFi when starting
> > that says: start the app, but don't start any of the components on the
> > canvas (turn everything to the "Stopped" state before starting), e.g.
> > ./nifi.sh start --noFlow?
> > Is there a way to "save users from themselves" in cases like this?
> > Is there a way for us to make NiFi more resilient to flow
> > misconfigurations that cause it to crash?
> >
> > As always, any thoughts or input are greatly appreciated.
> >
> > Cheers,
> >
> > Ryan H.
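For reference, the property Bryan mentions lives in conf/nifi.properties and defaults to true; with it set to false, NiFi starts with every component in the "Stopped" state regardless of how it was left before the crash:

    # conf/nifi.properties
    nifi.flowcontroller.autoResumeState=false

For option 2 above, a minimal sketch of the kind of edit involved, assuming a flow.xml.gz layout in which processors and ports record their state in <scheduledState> elements (the element name and the conf/flow.xml.gz path are assumptions to verify against your own flow file; keep a backup and run it only while NiFi is down):

    import gzip
    import shutil
    import xml.etree.ElementTree as ET

    path = "conf/flow.xml.gz"            # assumed location of the flow file
    shutil.copy(path, path + ".bak")     # keep a backup before editing

    with gzip.open(path, "rb") as f:
        tree = ET.parse(f)

    # Flip every RUNNING component to STOPPED so NiFi can start without
    # re-triggering the misconfigured components.
    for state in tree.iter("scheduledState"):
        if state.text == "RUNNING":
            state.text = "STOPPED"

    with gzip.open(path, "wb") as f:
        tree.write(f, encoding="utf-8", xml_declaration=True)

Once NiFi starts cleanly, the offending components can be reconfigured and started again from the canvas.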
