markap14 commented on PR #6560:
URL: https://github.com/apache/nifi/pull/6560#issuecomment-1287071378

   -1 I don't think this is a change that we want to make at this time. The 
existing value of USE_SPECIFIED_OR_FAIL is correct here. When we start up in 
standalone mode, not as part of a cluster, we want to go ahead and start, even 
if the instance is missing one of the extensions referenced in the dataflow. 
When a user looks at the UI, this is made clear by the fact that the processor 
is "Ghosted," and tooltips, etc. explain exactly what is wrong, so it is easy 
to address.
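   
   To illustrate the idea, here is a minimal, self-contained sketch. None of 
these classes are NiFi's actual implementation; only the value name 
USE_SPECIFIED_OR_FAIL comes from the code under discussion, and 
USE_SPECIFIED_OR_GHOST is a hypothetical counterpart. When a referenced 
bundle cannot be found, the strategy decides whether startup fails or the 
component loads as a "ghost" that the UI can flag:

```java
import java.util.Optional;

public class MissingExtensionSketch {

    // Hypothetical stand-in for the strategy being discussed; only the
    // value name USE_SPECIFIED_OR_FAIL appears in this conversation.
    enum BundleUpdateStrategy { USE_SPECIFIED_OR_FAIL, USE_SPECIFIED_OR_GHOST }

    record Component(String type, boolean ghosted) { }

    // Pretend extension lookup: returns empty to simulate a missing bundle.
    static Optional<Component> lookup(String type) {
        return Optional.empty();
    }

    static Component load(String type, BundleUpdateStrategy strategy) {
        return lookup(type).orElseGet(() -> {
            if (strategy == BundleUpdateStrategy.USE_SPECIFIED_OR_FAIL) {
                throw new IllegalStateException("Unable to find bundle for " + type);
            }
            // Ghosting: the flow still loads, and the UI can flag the component.
            return new Component(type, true);
        });
    }

    public static void main(String[] args) {
        Component c = load("org.example.MyProcessor",
                BundleUpdateStrategy.USE_SPECIFIED_OR_GHOST);
        System.out.println(c.type() + " ghosted=" + c.ghosted());
    }
}
```

For a single standalone instance, the ghosting branch is the right tradeoff: 
one operator sees the ghosted component immediately in the UI and can fix it.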
   
   However, when a node is part of a cluster, things are a bit more complex. 
Every node in the cluster is expected to be doing the same thing. This change 
would result in some nodes in the cluster doing one thing while other nodes 
are potentially doing something completely different. For example, if one of 
five nodes is missing an extension, or has an outdated version of it, the UI 
may show that version 1.19 of the extension is running, while that one node is 
actually running version 1.15, which behaves differently. This can lead to a 
great deal of confusion.
   
   It is also worth noting that making this change would not really enable 
zero-downtime rolling upgrades. It would remove one of the roadblocks (with 
significant downsides/tradeoffs), but many other things would also have to be 
considered: the cluster protocol between nodes, the load-balancing protocol, 
the site-to-site protocol, etc.
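   
   As a rough illustration of what "considering" one of those protocols would 
mean (hypothetical code, not a NiFi API): once homogeneity can no longer be 
assumed, every inter-node exchange would need an explicit compatibility check 
along these lines, rather than relying on all nodes being identical.

```java
public class ProtocolHandshakeSketch {

    // Hypothetical identity exchanged during a cluster handshake.
    record NodeIdentity(String hostname, String nifiVersion) { }

    // Today the code can assume versions match; with mixed versions, every
    // protocol exchange would need an explicit check like this one.
    static void validateConnection(NodeIdentity coordinator, NodeIdentity joining) {
        if (!coordinator.nifiVersion().equals(joining.nifiVersion())) {
            throw new IllegalStateException(
                "Node " + joining.hostname() + " runs NiFi " + joining.nifiVersion()
                + " but the cluster coordinator runs " + coordinator.nifiVersion());
        }
    }

    public static void main(String[] args) {
        NodeIdentity coordinator = new NodeIdentity("node1", "1.19.0");
        validateConnection(coordinator, new NodeIdentity("node2", "1.19.0")); // accepted
        validateConnection(coordinator, new NodeIdentity("node3", "1.15.0")); // rejected
    }
}
```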
   
   In short, many parts of the codebase operate on the assumption that nodes 
are homogeneous (with respect to the NiFi version). This change would allow 
that assumption to no longer hold, which can cause a lot of problems, many of 
which may be very difficult to understand and troubleshoot.

