Hello,
I am storing 400000 rows in an EArray as follows:
import numpy as np
from tables import Float32Atom

if 'normI' in grp:
    fh.removeNode(grp, 'normI')
fh.createEArray(grp, 'normI', Float32Atom(), (0, 512), expectedrows=800000)

... populate 400000 rows of normI array ...
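(Roughly, the population step appends blocks of rows along the enlargeable
dimension; this is just a minimal sketch, where the 1000-row block size and the
random stand-in data are placeholders, not my real data:)

# Hypothetical population loop: append blocks of rows rather than one
# row at a time; real data would replace the random placeholder blocks.
for start in range(0, 400000, 1000):
    block = np.random.rand(1000, 512).astype('float32')
    grp.normI.append(block)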

When I use it as follows:
tmp = np.asarray(grp.normI[:, k])  # grab the k'th column of the EArray
tmp = SomeCalculation(tmp)         # this is very fast
grp.SomeCArray[:, k] = tmp         # also very fast, but only ~100 values are
                                   # stored here, so it may not say much
                                   # about performance either way


it is horribly slow: the np.asarray call takes ~30 seconds. That is only about
32 Kbyte/s if just the 400000*4 bytes of the column are being read, as they
should be, but about 16 Mbyte/s if all 512*4*400000 bytes are being read and
then sliced. When I check the disk read performance, I see that it is indeed
reading continuously at around 16 Mbyte/s.
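To make the arithmetic explicit, here is a minimal sketch of the rate
computation (the timing harness is illustrative, not my actual script; grp, k,
and np are as in the snippets above):

import time

t0 = time.time()
tmp = np.asarray(grp.normI[:, k])   # read one column, as above
dt = time.time() - t0

col_bytes = 400000 * 4              # bytes actually wanted: one float32 column
full_bytes = 400000 * 512 * 4       # bytes if every full row is read, then sliced
print('%.1f s: %.0f Kbyte/s (column only), %.1f Mbyte/s (full rows)' % (
    dt, col_bytes / dt / 1024.0, full_bytes / dt / 1024.0 ** 2))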
Am I doing something wrong?
Thank you,
Glenn

