Hi All,

Yesterday I submitted an issue I have with PyTables and multiprocessing to
the forum.
I indicated the possible sources of error, but it is clearer if I just give
the piece of code that goes wrong. Here it is:
    if mp:
        if N_proc is None:
            N_proc = multiprocessing.cpu_count()
        print "Running in // on %d processes" % N_proc
        pool = multiprocessing.Pool(processes=N_proc)
        # inf.iter_all_row is an iterator which yields the rows of the
        # 2D NPKData "inf" one by one; rows come out as 1D NPKData,
        # to which all methods can be applied
        result = pool.imap(func, inf.iter_all_row)
    else:
        print "Running on 1 process"
        result = itertools.imap(func, inf.iter_all_row)
    # then collect the processed rows into outf
    for i, d1D in enumerate(result):
        if i % 64 == 5:
            print "\nproc row %d" % i
        outf.set_row(i, d1D)

It consists of two parts: one part prepares the multiprocessing, and the
other treats the data and stores it (the loop "for i, d1D ...").
inf is my source file in NPKData format (a special format bound to an HDF5
file); outf is in the same NPKData format and is also bound to an HDF5 file.
I was told to open and close the files each time a calculation finishes, so
as not to create conflicts between processes.
But I can't see how to realize that in my code. Do I have to close the HDF5
files after each outf.set_row and reopen them again just after? How should
I proceed?
Thanks!

Cheers
Lionel
_______________________________________________
Pytables-users mailing list
Pytables-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/pytables-users
