execute() gets me a bit closer (I can maintain a dictionary of
results), but I seem to be spawning the same task multiple times.
Example code:
import pprint

from fabric.api import *
from fabric.colors import red, green

pp = pprint.PrettyPrinter(indent=4)

@task
def mytask():
    with settings(hide('warnings', 'running', 'stdout',
                       'stderr', 'output')):
        cmd = 'sleep 1 && date'
        myresult = run(cmd)
        if myresult.failed:
            mystate = 'FAILED'
        else:
            mystate = 'OK'
        r = {'output': str(myresult),
             'state': mystate}
        return r
@task
def runtask():
    result = execute(mytask)
    print env.host_string,
    print result[env.host_string]['state']
    print '-' * 80
    print 'DEBUG'
    pp.pprint(result)
Output:
$ fab -H host-01,host-02,host-03,host-04 -P -z 2 runtask
[host-01] Executing task 'runtask'
[host-02] Executing task 'runtask'
[host-03] Executing task 'runtask'
[host-04] Executing task 'runtask'
[host-01] Executing task 'mytask'
[host-02] Executing task 'mytask'
[host-03] Executing task 'mytask'
[host-04] Executing task 'mytask'
[host-01] Executing task 'mytask'
[host-02] Executing task 'mytask'
[host-03] Executing task 'mytask'
[host-04] Executing task 'mytask'
host-03 OK
--------------------------------------------------------------------------------
DEBUG
{ 'host-01': { 'output': 'Fri Apr 5 11:48:59 PDT 2013',
'state': 'OK'},
'host-02': { 'output': 'Fri Apr 5 11:48:57 PDT 2013',
'state': 'OK'},
'host-03': { 'output': 'Fri Apr 5 11:48:53 PDT 2013',
'state': 'OK'},
'host-04': { 'output': 'Fri Apr 5 11:48:52 PDT 2013',
'state': 'OK'}}
host-04 OK
--------------------------------------------------------------------------------
DEBUG
{ 'host-01': { 'output': 'Fri Apr 5 11:49:00 PDT 2013',
'state': 'OK'},
'host-02': { 'output': 'Fri Apr 5 11:48:59 PDT 2013',
'state': 'OK'},
'host-03': { 'output': 'Fri Apr 5 11:48:53 PDT 2013',
'state': 'OK'},
'host-04': { 'output': 'Fri Apr 5 11:48:53 PDT 2013',
'state': 'OK'}}
[host-01] Executing task 'mytask'
[host-02] Executing task 'mytask'
[host-03] Executing task 'mytask'
[host-04] Executing task 'mytask'
[host-01] Executing task 'mytask'
[host-02] Executing task 'mytask'
[host-03] Executing task 'mytask'
[host-04] Executing task 'mytask'
host-02 OK
--------------------------------------------------------------------------------
DEBUG
{ 'host-01': { 'output': 'Fri Apr 5 11:49:18 PDT 2013',
'state': 'OK'},
'host-02': { 'output': 'Fri Apr 5 11:49:16 PDT 2013',
'state': 'OK'},
'host-03': { 'output': 'Fri Apr 5 11:49:09 PDT 2013',
'state': 'OK'},
'host-04': { 'output': 'Fri Apr 5 11:49:10 PDT 2013',
'state': 'OK'}}
host-01 OK
--------------------------------------------------------------------------------
DEBUG
{ 'host-01': { 'output': 'Fri Apr 5 11:49:18 PDT 2013',
'state': 'OK'},
'host-02': { 'output': 'Fri Apr 5 11:49:17 PDT 2013',
'state': 'OK'},
'host-03': { 'output': 'Fri Apr 5 11:49:10 PDT 2013',
'state': 'OK'},
'host-04': { 'output': 'Fri Apr 5 11:49:10 PDT 2013',
'state': 'OK'}}
Done.
See how the timestamps keep growing and mytask keeps getting spawned?
I only want to execute mytask once per host, while still keeping the
parallel pool moving along.
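
Is the fix to restructure things so the driver task has no hosts of
its own and only fires once? Something like this, maybe (untested
sketch; the inline host list and pool size are just placeholders):

from fabric.api import *

@task
@parallel(pool_size=2)
def mytask():
    with settings(hide('warnings', 'running', 'stdout',
                       'stderr', 'output')):
        myresult = run('sleep 1 && date')
    return {'output': str(myresult),
            'state': 'FAILED' if myresult.failed else 'OK'}

@task
def runtask():
    # Invoked as plain `fab runtask` (no -H/-P), so this driver runs
    # exactly once; execute() fans mytask out across the hosts.
    result = execute(mytask, hosts=['host-01', 'host-02',
                                    'host-03', 'host-04'])
    for host in sorted(result):
        print host, result[host]['state']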
Cheers,
--Joel
On Fri, Apr 5, 2013 at 10:19 AM, Jeff Forcier <[email protected]> wrote:
> Hi Joel,
>
> You need to use execute():
>
>
> http://docs.fabfile.org/en/latest/api/core/tasks.html#fabric.tasks.execute
>
> Parallel uses multiprocessing, so each per-host invocation gets its own
> memory space, which is why env won't work right.
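>
> For example, with plain multiprocessing (nothing Fabric-specific, just
> to illustrate the isolation):
>
>     from multiprocessing import Process
>
>     counts = {'ok': 0}
>
>     def work():
>         counts['ok'] += 1       # mutates the child's copy only
>
>     p = Process(target=work)
>     p.start(); p.join()
>     print counts['ok']          # still 0 in the parent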
>
> Instead, manually call execute() on the task you need to gather from,
> have that task return whatever value is useful to you, and then
> execute() will return a dict mapping hosts to those return values :)
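>
> Roughly (off the top of my head, untested):
>
>     @task
>     def summarize():
>         # execute() returns {host: whatever mytask returned}
>         results = execute(mytask)
>         for host, value in sorted(results.items()):
>             print host, value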
>
> Best,
> Jeff
>
>
>
> On Fri, Apr 5, 2013 at 1:02 AM, Joel Krauska <[email protected]> wrote:
> > Hello fellow fabric users.
> >
> > During parallel runs, I would like to be able to collect information in
> > some sort of shared memory space to review after I'm done.
> >
> > This seems to be tricky when running in parallel (presumably due to
> > the threading).
> >
> > Example use cases:
> > - tracking success vs. failure counts after a run
> > - stuffing output into a tidy coalesced log after the job runs
> >
> > I tried stuffing the output into env, but that's apparently not a shared
> > global space...
> >
> > Any guidance on how this might be done?
> > (calling out to a DB/SQLite or even a lockfile-wrapped pickle seem like
> > passable solutions, but I'm hoping there's an easier way to collect data
> > during a parallel run and summarize it later...)
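> >
> > By "lockfile-wrapped pickle" I mean roughly this (untested, and the
> > path is made up):
> >
> >     import fcntl
> >     import pickle
> >
> >     def record(host, data, path='/tmp/fab_results.pickle'):
> >         # flock serializes the read-modify-write across the
> >         # parallel worker processes
> >         with open(path, 'a+b') as f:
> >             fcntl.flock(f, fcntl.LOCK_EX)
> >             f.seek(0)
> >             try:
> >                 results = pickle.load(f)
> >             except EOFError:
> >                 results = {}    # first writer sees an empty file
> >             results[host] = data
> >             f.seek(0)
> >             f.truncate()
> >             pickle.dump(results, f)
> >             fcntl.flock(f, fcntl.LOCK_UN)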
> >
> > Help?
> >
> > Is there a cleanup method on exit where I should be looking at env?
> >
> > Thanks,
> >
> > Joel Krauska
> >
> >
> >
> >
> > code snippet:
> >
> > from fabric.api import *
> > from fabric.colors import red, green
> >
> > env.mydata = {}
> > env.failcount = 0
> > env.goodcount = 0
> >
> > @task
> > def mytask():
> >     with settings(
> >         hide('warnings', 'running', 'stdout', 'stderr', 'output'),
> >     ):
> >         result = run("uname")
> >         print '-' * 80
> >         print env.host_string,
> >         if result.failed:
> >             print red('FAILED')
> >             env.failcount += 1
> >         else:
> >             print green('OK')
> >             env.goodcount += 1
> >         print result
> >         env.mydata[env.host_string] = str(result)
> >
> >     print env.mydata
> >
> > # These aren't holding anything useful...
> > print env.mydata
> > print env.failcount
> > print env.goodcount
> >
> >
> >
>
>
>
> --
> Jeff Forcier
> Unix sysadmin; Python/Ruby engineer
> http://bitprophet.org
>
_______________________________________________
Fab-user mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/fab-user