Hi Brandon,

As a personal favor :-) Dropping exec() actually makes it easier to check the args:
@task
def task_choser():
    host, values, task = env.host_string.split('__')
    args = { k: v for k, v in [ arg.split('=', 1) for arg in values.split(',') ] }
    if task == 'monitor_task':
        if not args.has_key('rackname'):
            raise ValueError('A rackname is required for monitor_task')
        return execute(task, hosts=[host], rackname=args['rackname'])[host]
    else:
        for argkey in ('load_node', 'load_base', 'load_max'):
            if not args.has_key(argkey):
                raise ValueError('A %s is required for run_load' % argkey)
        return execute(task, hosts=[host], load_node=args['load_node'],
                       load_base=args['load_base'], load_max=args['load_max'])[host]

I essentially took your test script as-is (BTW, I'm running Python 2.7.12 and
Fabric 1.13.2):

rob@robs-xubuntu2: [Projects]$ cat test_brandon.py
#!/usr/bin/python

from fabric.api import *
from pprint import pprint

@task
def hostname():
    return run('hostname')

@task
def uname():
    return run('uname -a')

@task
def task_chooser():
    # only consider up to the first underscore to be host data
    host, task = env.host_string.split('_', 1)
    results = execute('%s' % task, hosts=[host])
    return results

@task
def parallel_runner():
    host_list = [
        '10.245.129.185_hostname',
        '10.245.129.185_uname',
        '10.245.129.186_hostname',
        '10.245.129.186_uname'
    ]
    with settings(parallel=True):
        results = execute(task_chooser, hosts=host_list)
    pprint(results)
    return results

if __name__ == '__main__':
    execute(parallel_runner)

When I run it I get:

rob@robs-xubuntu2: [Projects]$ test_brandon.py
[10.245.129.185_hostname] Executing task 'task_chooser'
[10.245.129.185_uname] Executing task 'task_chooser'
[10.245.129.186_hostname] Executing task 'task_chooser'
[10.245.129.186_uname] Executing task 'task_chooser'

Fatal error: 'uname' is not callable or a valid task name

Aborting.

Fatal error: 'hostname' is not callable or a valid task name

Aborting.

Fatal error: 'uname' is not callable or a valid task name

Aborting.

Fatal error: 'hostname' is not callable or a valid task name

Aborting.

Fatal error: One or more hosts failed while executing task 'task_chooser'

Aborting.
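(A minimal sketch, untested and not from the thread: if the error comes from
Fabric's task registry being empty when the script is run directly rather than
through the fab CLI, which is what the load_fabfile workaround quoted further
down suggests, then one way to sidestep it is to resolve the task name to the
callable yourself, so execute() is never asked to look a string up. The TASKS
dict is a made-up helper, not a Fabric API.)

from fabric.api import env, execute, run, task

@task
def hostname():
    return run('hostname')

@task
def uname():
    return run('uname -a')

# Hypothetical name -> callable map; execute() also accepts a callable.
TASKS = {'hostname': hostname, 'uname': uname}

@task
def task_chooser():
    # only consider up to the first underscore to be host data
    host, name = env.host_string.split('_', 1)
    # passing the function object avoids the string lookup that fails above
    return execute(TASKS[name], hosts=[host])[host]

(parallel_runner from the script above should be able to stay as-is; only
task_chooser changes.)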
On Tue, Jun 19, 2018 at 2:47 AM Brandon Whaley <redkr...@gmail.com> wrote:
>
> Hmm, I'm not sure why run_parallel would throw an error like that. I'd be
> interested to see the full stack trace. You actually shouldn't need to use
> load_fabfile or commands.update; just using execute(run_parallel) should
> work. I'll take some time tomorrow and try to replicate your issue.
>
>
> P.S.
> As a personal favor for my sanity, I ask that you not use exec(). Here's an
> example of parsing an argument list like the one you're using exec() on:
>
> >>> import json
> >>> values = 'load_node=10.10.0.1,load_base=0,load_max=1000'
> >>> args = { k: v for k, v in [ arg.split('=', 1) for arg in values.split(',') ] }
> >>> print json.dumps(args, indent=4)
> {
>     "load_node": "10.10.0.1",
>     "load_base": "0",
>     "load_max": "1000"
> }
>
> You'd then check for args['load_node'] instead of using the local variable
> load_node.
>
> On Tue, Jun 19, 2018 at 1:43 AM Rob Marshall <rob.marshal...@gmail.com> wrote:
>>
>> Hi,
>>
>> So I modified your code a bit and ended up with something like this:
>>
>> @task
>> def monitor_task(rackname):
>>     cmd = [
>>         'run_rack_monitor',
>>         '--rack', rackname
>>     ]
>>
>>     return run(' '.join(cmd))
>>
>> @task
>> def run_load(load_node, load_base, load_max):
>>     cmd = [
>>         'run_system_load',
>>         '--datanode', load_node,
>>         '--base-value', str(load_base),
>>         '--max-value', str(load_max),
>>     ]
>>
>>     return run(' '.join(cmd))
>>
>> @task
>> def task_choser():
>>     host, values, task = env.host_string.split('__')
>>     for value in values.split(','):
>>         exec(value)
>>
>>     if task == 'monitor_task':
>>         return execute(task, hosts=[host], rackname=rackname)
>>     else:
>>         return execute(task, hosts=[host], load_node=load_node,
>>                        load_base=load_base, load_max=load_max)
>>
>> @task
>> def run_parallel():
>>     host_list = [
>>         '10.10.0.2__rackname="rackname01"__monitor_task',
>>         '10.10.0.2__rackname="rackname02"__monitor_task',
>>         '10.10.0.2__rackname="rackname03"__monitor_task',
>>         '10.10.0.1__load_node="10.10.0.1",load_base=0,load_max=1000__run_load',
>>         '10.10.0.2__load_node="10.10.0.2",load_base=1000,load_max=2000__run_load',
>>         '10.10.0.3__load_node="10.10.0.3",load_base=2000,load_max=3000__run_load',
>>         '10.10.0.4__load_node="10.10.0.4",load_base=3000,load_max=4000__run_load',
>>         '10.10.0.5__load_node="10.10.0.5",load_base=4000,load_max=5000__run_load',
>>         '10.10.0.6__load_node="10.10.0.6",load_base=5000,load_max=6000__run_load',
>>     ]
>>
>>     with settings(parallel=True):
>>         results = execute(task_choser, hosts=host_list)
>>
>>     return results
>>
>> Which allows me to pass in arguments to the tasks. I did run into one
>> odd thing: if I just tried to run run_parallel() as a function I got
>> an error:
>>
>> Fatal error: '...' is not callable or a valid task name
>>
>> So what I ended up doing (not sure if there's a better way) was:
>>
>> from fabric.main import load_fabfile
>> from fabric.state import commands
>> ...
>>
>> docstring, callables, default = load_fabfile(__file__)
>> commands.update(callables)
>>
>> with settings(hide('everything'), user='username', password='password1'):
>>     results = execute('run_parallel')
>>
>> That seemed to work.
>>
>> Thanks,
>>
>> Rob
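(An aside, not from the thread: one subtlety with the split-on-'=' parsing is
that every value comes back as a string, and the literal quotes in something
like rackname="rackname01" are preserved, so it may be worth normalizing the
values before handing them to the task. parse_values below is a hypothetical
helper, just a sketch:)

def parse_values(values):
    """Parse 'k1=v1,k2=v2' into a dict, stripping literal double quotes
    and converting purely numeric values to int."""
    args = {}
    for pair in values.split(','):
        key, val = pair.split('=', 1)
        val = val.strip('"')
        args[key] = int(val) if val.isdigit() else val
    return args

# parse_values('load_node="10.10.0.1",load_base=0,load_max=1000')
# -> {'load_node': '10.10.0.1', 'load_base': 0, 'load_max': 1000}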
>> On Mon, Jun 18, 2018 at 4:57 PM Brandon Whaley <redkr...@gmail.com> wrote:
>> >
>> > Hi Rob, I've done this as a hack in the past by adding data to the host
>> > list and parsing it before execution to determine what to run. I've built
>> > a simple example to give you an idea:
>> >
>> > @task
>> > def hostname():
>> >     return run('hostname')
>> >
>> > @task
>> > def uname():
>> >     return run('uname -a')
>> >
>> > @task
>> > def task_chooser():
>> >     # only consider up to the first underscore to be host data
>> >     host, task = env.host_string.split('_', 1)
>> >     return execute(task, hosts=[host])[host]
>> >
>> > @task
>> > def parallel_runner():
>> >     host_list = [
>> >         'host1_hostname',
>> >         'host1_uname',
>> >         'host2_hostname',
>> >         'host2_uname'
>> >     ]
>> >     with settings(parallel=True):
>> >         execute(task_chooser, hosts=host_list)
>> >
>> > [host1_hostname] Executing task 'task_chooser'
>> > [host1_uname] Executing task 'task_chooser'
>> > [host2_hostname] Executing task 'task_chooser'
>> > [host2_uname] Executing task 'task_chooser'
>> > [host2] Executing task 'uname'
>> > [host2] Executing task 'hostname'
>> > [host1] Executing task 'uname'
>> > [host2] run: uname -a
>> > [host1] Executing task 'hostname'
>> > [host2] run: hostname
>> > [host1] run: uname -a
>> > [host1] run: hostname
>> > [host1] out: Linux host1 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>> > [host1] out:
>> >
>> > [host2] out: host2
>> > [host2] out:
>> >
>> > [host2] out: Linux host2 4.4.0-63-generic #84-Ubuntu SMP Wed Feb 1 17:20:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>> > [host2] out:
>> >
>> > [host1] out: host1
>> > [host1] out:
>> >
>> >
>> > Done.
>> >
>> >
>> > On Mon, Jun 18, 2018 at 3:00 PM Rob Marshall <rob.marshal...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I'm trying to run multiple commands on the same host in parallel but
>> >> if I try to run a list of commands based on env.host_string it doesn't
>> >> run those commands in parallel. Is there a way to do that?
>> >>
>> >> I guess, in essence, I'd like to "nest" parallel commands. I
>> >> originally attempted to place the host in the hosts list multiple
>> >> times, but it looks like parallel removes duplicates (I assume this
>> >> has to do with separating results by host).
>> >>
>> >> Thanks,
>> >>
>> >> Rob

_______________________________________________
Fab-user mailing list
Fab-user@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fab-user
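(A closing note, not from the thread: the suffix trick above gets around the
de-duplication Rob describes because Fabric treats each distinct host string
as its own "host", both for scheduling the parallel runs and for keying the
results. The outer chooser task never connects to those synthetic names; only
the inner execute() call does, using the real address split back out of the
string. A tiny illustration with hypothetical host names:)

# Each entry is unique, so nothing gets de-duplicated...
host_list = ['host1_hostname', 'host1_uname', 'host2_hostname', 'host2_uname']

# ...yet every entry still maps back to one real host for the inner execute().
real_hosts = [entry.split('_', 1)[0] for entry in host_list]
print(real_hosts)   # ['host1', 'host1', 'host2', 'host2']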