Hi again,

> Choosing the slice size should not be difficult; just pick something that 
> is not too large or too small (anything between 1 MB and 10 MB should do 
> fine).  The only thing to keep in mind is that your slices should not 
> exceed your available memory.  PyTables will automatically determine an 
> adequate HDF5 chunk size for your on-disk datasets.
> 
> 
> Uh, no.  tables.Expr only supports simple element-wise operations whose 
> output has the same shape as the operands (so `nonzero` is not supported).  
> Also, it cannot carry out operations that make use of different indices 
> in the operands when computing an element (so `diff` is not supported 
> either).  Rather, you need to think of tables.Expr (and numexpr in 
> general) as a virtual machine that only accepts vectors (matrices) and 
> can perform operations only among elements in the same positions (mostly 
> like a SIMD processor).
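
For anyone following along, the constraint described above can be illustrated
with plain NumPy (a sketch, not tables.Expr itself):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Element-wise expressions fit the numexpr/tables.Expr model:
# each output element depends only on the operands at the same index.
ok = a * b + 1          # out[i] = a[i] * b[i] + 1

# These do NOT fit the model, because an output element depends on
# operands at *other* indices, or the output shape depends on the data:
d = np.diff(a)          # out[i] = a[i+1] - a[i]  -> needs neighbouring elements
nz = np.nonzero(a > 1)  # output length is data-dependent
```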

Now I get it, thanks.  Would you recommend using the same chunked approach as for 
the data copy above to perform the "on-disk" threshold detection?
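
In case it helps the discussion, here is a rough sketch of what I have in mind.
It uses a plain NumPy array as a stand-in for the on-disk dataset; I assume a
tables.CArray/EArray would support the same slicing, and the function name and
slice size are just illustrative:

```python
import numpy as np

def threshold_indices(arr, threshold, slice_size):
    """Scan `arr` one slice at a time and collect the indices of
    elements exceeding `threshold`.  `arr` only needs to support
    1-D slicing (e.g. a PyTables CArray/EArray), so the whole
    dataset never has to fit in memory at once."""
    hits = []
    n = len(arr)
    for start in range(0, n, slice_size):
        chunk = arr[start:start + slice_size]  # one slice read into memory
        # nonzero() runs in memory, but only on the small chunk
        hits.append(np.nonzero(chunk > threshold)[0] + start)
    return np.concatenate(hits) if hits else np.array([], dtype=np.intp)

# Demo with an in-memory array standing in for an on-disk one:
data = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.95])
print(threshold_indices(data, 0.5, slice_size=2))  # -> [1 3 5]
```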

Best,

Barte


_______________________________________________
Pytables-users mailing list
Pytables-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/pytables-users
