I benchmarked reads and writes on my system with Blosc[1]:

with pt.openFile(paths.braw(block), 'r') as handle:
    pt.setBloscMaxThreads(1)
    %timeit  a = handle.root.raw.c042[:]
    pt.setBloscMaxThreads(6)
    %timeit  a = handle.root.raw.c042[:]
    pt.setBloscMaxThreads(11)
    %timeit  a = handle.root.raw.c042[:]
    print handle.root.raw._v_attrs.FILTERS
    print handle.root.raw.c042.__sizeof__()
    print handle.root.raw.c042

gives


1 loops, best of 3: 483 ms per loop
1 loops, best of 3: 782 ms per loop
1 loops, best of 3: 663 ms per loop
Filters(complevel=5, complib='blosc', shuffle=True, fletcher32=False)
104
/raw/c042 (CArray(303390000,), shuffle, blosc(5)) ''


For the life of me, I can't understand what is going on. These datasets use
int16 atoms and, at Blosc complevel=5, used to compress by a factor of about
2. Even at such a low compression ratio there should be a clear difference
between single- and multi-threaded reads.

Do you have any clue?

-á.

[1] http://blosc.pytables.org/trac/wiki/SyntheticBenchmarks (first two
plots)
_______________________________________________
Pytables-users mailing list
Pytables-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/pytables-users
