We currently run both Nimbus and the Supervisor on the same cluster. When I run 
'storm kill_workers', all of the workers are killed but are then restarted. In 
the supervisor log I see the following for each topology:

2019-04-30 16:21:17,571 INFO  Slot [SLOT_19227] STATE KILL_AND_RELAUNCH msInState: 5 topo:WingmanTopology998-1-1556594165 worker:f0de554d-81a1-48ce-82e8-9beef009969b -> WAITING_FOR_WORKER_START msInState: 0 topo:WingmanTopology998-1-1556594165 worker:f0de554d-81a1-48ce-82e8-9beef009969b
2019-04-30 16:21:25,574 INFO  Slot [SLOT_19227] STATE WAITING_FOR_WORKER_START msInState: 8003 topo:WingmanTopology998-1-1556594165 worker:f0de554d-81a1-48ce-82e8-9beef009969b -> RUNNING msInState: 0 topo:WingmanTopology998-1-1556594165 worker:f0de554d-81a1-48ce-82e8-9beef009969b

Is this the expected behavior, i.e. the worker process is bounced rather than 
killed? I had thought that kill_workers would essentially run 'storm kill' for 
each of the worker processes.
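
For reference, these are the two commands I'm comparing (the topology name in 
the second command is just my guess at the name behind the 
WingmanTopology998-1-1556594165 id shown in the log):

    # Run on the supervisor node: kills the worker JVMs on that node
    storm kill_workers

    # Kills the topology itself (what I assumed kill_workers did per worker)
    storm kill WingmanTopology998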
