Yes, I am planning to write one, but I wanted to check whether we can do it in a trivial way.
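For the single-node case, a "trivial" restart might look something like the sketch below. This is a rough, untested sketch, not a recommendation: it assumes the storm binary is on the PATH and Zookeeper is already running, and it finds daemons by their Java class names instead of kill -9 by hand. The [b] bracket trick keeps pkill from matching the script's own command line.

```shell
#!/bin/bash
# restart-storm.sh -- hypothetical single-node restart helper (name made up)

stop_daemon() {
    # pkill -f matches against the full command line, so the daemon's
    # Java class name is enough to find it
    if pkill -f "$1" 2>/dev/null
    then
        echo "stopped $1"
    else
        echo "$1 not running"
    fi
}

# the [b] in brackets prevents the pattern from matching this script itself
stop_daemon "[b]acktype.storm.ui.core"
stop_daemon "[b]acktype.storm.daemon.supervisor"
stop_daemon "[b]acktype.storm.daemon.nimbus"

# start everything again (assumes `storm` is on the PATH and
# Zookeeper is already up)
nohup storm nimbus >/dev/null 2>&1 &
sleep 5
nohup storm supervisor >/dev/null 2>&1 &
nohup storm ui >/dev/null 2>&1 &
echo "Storm daemons restarted"
```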
Thanks for the help, Matthias.

On Fri, Oct 16, 2015 at 2:19 PM, Matthias J. Sax <[email protected]> wrote:
> I would recommend writing a (bash) script.
>
> Personally, I use the following assembly of scripts to start/stop a
> cluster. The two main scripts are "start-storm.sh" and "stop-storm.sh";
> all others are just helpers.
>
> -Matthias
>
> cat start-storm.sh
> > #!/bin/bash
> >
> > tools=/home/tools
> >
> > # check if <storm.yaml> and <stormnodes> exist
> > if [ ! -f $HOME/storm.yaml ]
> > then
> >   echo "ERROR: <storm.yaml> not found in HOME directory"
> >   exit 1
> > fi
> >
> > if [ ! -f $HOME/stormnodes ]
> > then
> >   echo "ERROR: <stormnodes> not found in HOME directory"
> >   exit 1
> > fi
> >
> > if [ $# -lt 1 ]
> > then
> >   echo "ERROR: missing parameter <version>"
> >   exit 1
> > fi
> >
> > if [ "$1" = "7" ]
> > then
> >   VERSION=0.7.0
> > elif [ "$1" = "8" ]
> > then
> >   VERSION=0.8.2
> > elif [ "$1" = "9" ]
> > then
> >   VERSION=0.9.3
> > else
> >   echo "ERROR: unknown version number"
> >   echo "valid parameter values: 7, 8, 9"
> >   exit 1
> > fi
> >
> > echo "Switching to Storm $VERSION"
> > switch-storm.sh $1
> >
> > echo "Starting Zookeeper..."
> > # step into zookeeper directory because <zookeeper.out> will be written locally
> > cd /var/zookeeper/ && /opt/zookeeper/bin/zkServer.sh start
> >
> > echo "Distributing storm.yaml"
> > # need to step into $HOME, because the remote target dir of cpAll is the current local dir
> > cd $HOME && $tools/cpAll storm.yaml $HOME/stormnodes
> > $tools/exeAll -l "rm -f /opt/storm/conf/storm.yaml; ln -s $HOME/storm.yaml /opt/storm/conf/storm.yaml" $HOME/stormnodes
> >
> > echo "Starting Nimbus..."
> > /opt/storm/bin/storm nimbus &
> >
> > sleep 15
> >
> > echo "Starting Supervisors..."
> > for node in `cat $HOME/stormnodes`
> > do
> >   echo "  at $node"
> >   ssh $node "nohup /opt/storm/bin/storm supervisor &" &
> > done
> >
> > echo "Starting Storm UI..."
> > /opt/storm/bin/storm ui &
>
> cat stop-storm.sh
> > #!/bin/bash
> >
> > tools=/home/tools
> >
> > # check if <stormnodes> exists
> > if [ ! -f $HOME/stormnodes ]
> > then
> >   echo "ERROR: <stormnodes> not found in HOME directory"
> >   exit 1
> > fi
> >
> > echo "Stopping Storm UI..."
> > kill `ps -ef | grep backtype.storm.ui.core | grep -v grep | sed 's/  */ /g' | cut -d' ' -f 2`
> >
> > echo "Stopping Supervisors"
> > for node in `cat $HOME/stormnodes`
> > do
> >   echo "  at $node"
> >   ssh $node "kill \`ps -ef | grep backtype.storm.daemon.supervisor | grep -v grep | sed 's/  */ /g' | cut -d' ' -f 2\`"
> > done
> >
> > echo "Stopping Nimbus..."
> > kill `ps -ef | grep backtype.storm.daemon.nimbus | grep -v grep | sed 's/  */ /g' | cut -d' ' -f 2`
> >
> > echo "Stopping Zookeeper..."
> > /opt/zookeeper/bin/zkServer.sh stop
> >
> > echo "Cleaning working and logging directories..."
> > $tools/exeAll -l "rm -rf /var/storm/*" $HOME/stormnodes
> > $tools/exeAll -l "rm -rf /opt/storm/logs/*" $HOME/stormnodes
> > rm -rf /var/zookeeper/*
>
> cat exeAll
> > #!/bin/bash
> >
> > param=$1
> > rc=0
> >
> > if [ $# -lt 1 ]
> > then
> >   echo "ERROR: exeAll expects parameters"
> >   param="-h"
> >   rc=1
> > fi
> >
> > if [ "$param" = "-h" ]
> > then
> >   echo "usage: exeAll ([-l] <command> [<nodefile>]) | -h"
> >   echo "  -l: execute command locally"
> >   echo "  <command>: the command to be executed"
> >   echo "  <nodefile>: file containing names of nodes to run the command"
> >   echo "  -h: displays this help"
> >   exit $rc
> > fi
> >
> > command=$1
> > nodefile=/home/tools/nodes
> >
> > if [ $# -gt 1 ] # there are optional parameters
> > then
> >   if [ "$1" = "-l" ] # parameters: -l <command> [<nodefile>]
> >   then
> >     command=$2
> >     if [ $# -gt 2 ] # parameter <nodefile> present
> >     then
> >       nodefile=$3
> >     fi
> >
> >     echo "locally: $command"
> >     # execute command locally
> >     eval $command
> >   else # parameters: <command> <nodefile>
> >     nodefile=$2
> >   fi
> > fi
> >
> > if [ ! -f $nodefile ]
> > then
> >   echo "ERROR: nodefile '$nodefile' not found."
> >   exit 1
> > fi
> >
> > # execute command remotely
> > path=`pwd`
> > for node in `cat $nodefile`
> > do
> >   if [ $node = $HOSTNAME ]
> >   then
> >     continue
> >   fi
> >
> >   echo "$node: $command"
> >   ssh $node "cd $path && $command"
> > done
>
> cat switch-storm.sh
> > #!/bin/bash
> >
> > tools=/home/tools
> >
> > if [ ! -f $HOME/stormnodes ]
> > then
> >   echo "ERROR: <stormnodes> not found in HOME directory"
> >   exit 1
> > fi
> >
> > $tools/exeAll -l "sudo sst '$@'" $HOME/stormnodes
>
> cat sst
> > #!/bin/bash
> >
> > if [ $# -lt 1 ]
> > then
> >   echo "ERROR: missing parameter <version>"
> >   exit 1
> > fi
> >
> > if [ "$1" = "7" ]
> > then
> >   VERSION=0.7.0
> > elif [ "$1" = "8" ]
> > then
> >   VERSION=0.8.2
> > elif [ "$1" = "9" ]
> > then
> >   VERSION=0.9.3
> > else
> >   echo "ERROR: unknown version number"
> >   echo "valid parameter values: 7, 8, 9"
> >   exit 1
> > fi
> >
> > cd /opt
> > rm storm
> > ln -s storm-$VERSION storm
>
> cat cpAll
> > #!/bin/bash
> >
> > param=$1
> > rc=0
> >
> > if [ $# -lt 1 ]
> > then
> >   echo "ERROR: cpAll expects parameters"
> >   param="-h"
> >   rc=1
> > fi
> >
> > if [ "$param" = "-h" ]
> > then
> >   echo "usage: cpAll (<filelist> [<nodefile>]) | -h"
> >   echo "  <filelist>: list of files to be copied"
> >   echo "  <nodefile>: file containing names of nodes to copy to"
> >   echo "  -h: displays this help"
> >   exit $rc
> > fi
> >
> > files=$1
> > nodefile=/home/tools/nodes
> >
> > if [ $# -gt 1 ] # there are optional parameters
> > then
> >   nodefile=$2
> > fi
> >
> > if [ ! -f $nodefile ]
> > then
> >   echo "ERROR: nodefile '$nodefile' not found."
> >   exit 1
> > fi
> >
> > # copy files
> > path=`pwd`
> > for node in `cat $nodefile`
> > do
> >   if [ $node = $HOSTNAME ]
> >   then
> >     continue
> >   fi
> >
> >   echo "$node:"
> >   scp $files "$node:$path"
> > done
>
> On 10/16/2015 10:10 AM, Ankur Garg wrote:
> > Hi,
> >
> > I have a single-node Storm cluster set up. Currently, to start the
> > Storm Nimbus and Supervisor daemons, I use the "storm nimbus" and
> > "storm supervisor" commands.
> >
> > To stop them, I currently do a kill -9 on those processes manually.
> >
> > Is there something I can use to restart the cluster with one single
> > command?
> >
> > Thanks
> > Ankur
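
As an aside, the core of the exeAll and cpAll helpers above is the same loop over a node file. Stripped of option parsing it can be distilled into a small function; this is an untested sketch with a made-up name (run_on_nodes), shown only to make the pattern explicit:

```shell
# run_on_nodes -- distilled sketch of the exeAll loop (name made up)
# usage: run_on_nodes <nodefile> <command...>
run_on_nodes() {
    local nodefile=$1
    shift
    if [ ! -f "$nodefile" ] || [ $# -lt 1 ]
    then
        echo "usage: run_on_nodes <nodefile> <command...>" >&2
        return 1
    fi
    local node
    while read -r node
    do
        # never ssh to ourselves
        [ "$node" = "$HOSTNAME" ] && continue
        echo "$node: $*"
        ssh "$node" "$@"
    done < "$nodefile"
}
```

Compared with the quoted scripts, passing the command as "$@" instead of a single string avoids the eval and the re-quoting issues, at the cost of losing the cd-to-current-directory behaviour.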
