Our cluster got into a state where I was unable to empty a couple of queues. One queue partially emptied (224 FlowFiles deleted out of 370), leaving 136 stuck. I believe this state arose when all three disks on the cluster filled up -- logs weren't being managed. After cleaning up the disks I was still unable to clear the queues, so I restarted the cluster (in hindsight I should have restarted only the affected node). After the restart I was able to empty the queues.
The only item of interest in the logs while the queues were stuck was the following -- the same drop request, logged on all three nodes:

[10.80.53.108] out: 2017-10-10 11:04:22,781 INFO [NiFi Web Server-31515] o.a.n.controller.StandardFlowFileQueue Initiating drop of FlowFiles from FlowFileQueue[id=a13c595c-015e-1000-0000-00004be5e64d] on behalf of nderraugh (request identifier=06d0b7d9-015f-1000-ffff-ffff800b7b85)
[10.80.54.234] out: 2017-10-10 11:04:22,778 INFO [NiFi Web Server-249845] o.a.n.controller.StandardFlowFileQueue Initiating drop of FlowFiles from FlowFileQueue[id=a13c595c-015e-1000-0000-00004be5e64d] on behalf of nderraugh (request identifier=06d0b7d9-015f-1000-ffff-ffff800b7b85)
[10.80.52.161] out: 2017-10-10 11:04:22,788 INFO [NiFi Web Server-186824] o.a.n.controller.StandardFlowFileQueue Initiating drop of FlowFiles from FlowFileQueue[id=a13c595c-015e-1000-0000-00004be5e64d] on behalf of nderraugh (request identifier=06d0b7d9-015f-1000-ffff-ffff800b7b85)

I realize this is probably outside the bounds of acceptable system administration, but if a bug report is warranted I'm happy to provide any other info that might help.
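For anyone trying to reproduce or observe this from outside the UI: the "Initiating drop" log line corresponds to the asynchronous drop-request endpoints in the NiFi REST API (POST to create the request, GET to poll it). Below is a minimal sketch; the base URL is a placeholder for one of our node addresses, and the helpers are illustrative, not part of any NiFi client library:

```python
import json
import urllib.request

# Placeholder: one node of the cluster; adjust host/port for your setup.
BASE = "http://10.80.53.108:8080/nifi-api"


def drop_request_url(connection_id: str) -> str:
    """Build the drop-requests endpoint for a connection (queue)."""
    return f"{BASE}/flowfile-queues/{connection_id}/drop-requests"


def initiate_drop(connection_id: str) -> dict:
    """POST creates an asynchronous drop request; the response carries its id."""
    req = urllib.request.Request(drop_request_url(connection_id), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["dropRequest"]


def poll_drop(connection_id: str, request_id: str) -> dict:
    """GET reports progress; 'finished' and the current queue count show
    whether FlowFiles remain stuck after the drop completes."""
    url = f"{drop_request_url(connection_id)}/{request_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["dropRequest"]
```

Polling a stuck queue this way (rather than through the UI) makes it easier to capture the exact drop-request state alongside the log lines above.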
