On Jan 23, 2008 11:10 AM, Brent Pedersen <[EMAIL PROTECTED]> wrote:
>
> On Jan 23, 2008 10:11 AM, Anselm Hook <[EMAIL PROTECTED]> wrote:
> > Thought I'd ask the list this question more directly:
> >
> > If you have a large cellular automaton, such as, say, Conway's Life (or
> > something with perhaps a few more bits per pixel), what is an efficient way
> > to represent it in memory?
> >
> > It seems to be similar to compressing an image.  There are a variety of
> > algorithms for compressing images.  The goal often seems to be to find
> > duplicate blocks.
> >
> > One constraint is that I want the data to be pixel addressable and speed is
> > critical since the data-set may be large.  The best performance is of course
> > linear time with no indirection ( pixel = memory[ x + y * stride ] ).
> >
> > This is intended to be used to simulate watersheds.
> >
> >  - a
> >
> >
> > _______________________________________________
> > Geowanking mailing list
> > [email protected]
> > http://lists.burri.to/mailman/listinfo/geowanking
> >
>
> hi, i don't know at all how to address your compression question, but
> re the simulation:
>
> if you can model the CA as a convolution, then you can let python do
> the work via numpy/scipy, specifically scipy.signal.convolve2d(),
> e.g.:
> >>> grid = convolve2d(grid, kernel, mode='same', boundary='wrap')
>
>
> even if you do need direct per-pixel access, there is excellent support
> for that in numpy arrays via a number of options:
> cython/pyrex, weave.inline, and pyinstant are all numpy-aware.
> this is a good reference:
> http://www.scipy.org/PerformancePython
>
> i don't know what dimensions you'll be dealing with, but in my
> experience this scales pretty well.
>
> -brentp
>
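
As a concrete illustration of the convolution approach above, here is a minimal sketch of a full Life update step in plain numpy (no scipy dependency), equivalent to the quoted convolve2d call with boundary='wrap'; the `life_step` name is just for illustration:

```python
import numpy as np

def life_step(grid):
    # Sum the 8 neighbors via toroidal shifts (same effect as
    # convolve2d with a 3x3 ones-kernel, center zeroed, boundary='wrap').
    nbrs = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # Conway's rules: birth on exactly 3 neighbors,
    # survival on 2 or 3 neighbors.
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

# A vertical "blinker" should rotate to horizontal after one step.
g = np.zeros((5, 5), dtype=np.uint8)
g[1:4, 2] = 1
g = life_step(g)
```

For a CA with more bits per pixel (as the original question mentions), the same shift-and-combine pattern works; only the update rule at the end changes.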

and (at the risk of sounding like an over-zealous python fan): PyTables
provides compression and an LRU cache for numpy arrays stored on disk.
http://www.pytables.org/docs/manual/ch04.html#CArrayClassDescr
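
Before reaching for PyTables, it is easy to gauge how compressible a sparse CA grid is using only the standard library; a rough sketch with zlib (the specific grid and fill pattern are made up for illustration, and the ratio will vary with grid density):

```python
import zlib
import numpy as np

# A mostly-empty 1-megapixel grid, one byte per cell.
grid = np.zeros((1024, 1024), dtype=np.uint8)
grid[::37, ::53] = 1  # scatter some live cells

raw = grid.tobytes()
packed = zlib.compress(raw, 6)
ratio = len(raw) / len(packed)
```

Mostly-zero grids compress by orders of magnitude, which is why chunked, compressed storage (as PyTables' CArray does, with an LRU cache on top) can stay fast while remaining element-addressable.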
