Uri Guttman wrote:
i can't spit out the syntax but here is the conceptual way i would do
it. we do have multidimensional slices so we could grab each slice
(maybe with zip?) and pass that to [+] and then grab the list of results
back into an array/matrix with one less dimension than the original.

Yup. I wonder, though, whether that formulation would be made simpler by the notion of iterating a data structure at a given depth. It might even be useful for $data{$x}{$y}{$z}[$i] or similar, where e.g. map() could make a deep copy down to a certain depth and then operate elementwise from there.
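
For comparison, Rakudo already has something in this spirit: deepmap recurses into nested Iterables and applies a block to every leaf, preserving the shape. It always goes all the way down, though, rather than stopping at a chosen depth, which is exactly the knob being asked for here. A minimal sketch:

    # deepmap recurses into nested Iterables and applies the block to
    # each leaf, returning a structure of the same shape
    say [[1, 2], [3, [4, 5]]].deepmap(* * 10);   # [[10 20] [30 [40 50]]]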

so it would be something like this (very wacko pseudo-code):

                @in[ * ; 2 ; * ] ==>
                map [+] ==>
                @out

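FWIW, the same shape of computation can be written today with plain nested arrays, using the [Z+] reduce-with-zip idiom to collapse the middle dimension. A rough sketch, assuming nothing beyond ordinary arrays-of-arrays (no special slice syntax):

    my @in = [ [1, 2], [3, 4] ],    # i = 0: one j-by-k plane
             [ [5, 6], [7, 8] ];    # i = 1

    # for each i-plane, zip-add its j rows elementwise, so the j
    # dimension collapses and a 2-D result remains
    my @out = @in.map: -> @plane { [ [Z+] @plane ] };

    say @out;   # [[4 6] [12 14]]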

One limiting thing about the @in[*;2;*] formulation is that the data being 3-dimensional, and the focus on the second dimension, are explicit in the syntax: it's presumably not possible to reduce a $j-dimensional array over dimension $i in the same way. Equivalently, the autoindexing described in Synopsis 9 looks like it enables

    do -> $i, $j, $k { @sum[$i, $k] = @data[$i, $j, $k] }

which is very elegant. But if I wanted to write a library which handles
arbitrary data "cubes", it would be nice to be able to specify the
dimensions using variables.
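
Nothing stops one from rolling that by hand over plain nested arrays, with the dimension as an ordinary run-time argument. A rough sketch (reduce-dim and elemwise are made-up helper names, not built-ins):

    # combine two equally-shaped nested structures elementwise with &op
    sub elemwise(&op, $a, $b) {
        $a ~~ Positional
            ?? ($a.list Z $b.list).map({ elemwise(&op, |$_) }).Array
            !! op($a, $b)
    }

    # collapse dimension $dim (0-based) of @cube using &op (default +)
    sub reduce-dim(@cube, Int $dim, &op = &[+]) {
        $dim == 0
            ?? @cube.reduce({ elemwise(&op, $^a, $^b) })
            !! @cube.map({ reduce-dim(@$_, $dim - 1, &op) }).Array
    }

    my @data = [ [1, 2], [3, 4] ],
               [ [5, 6], [7, 8] ];

    say reduce-dim(@data, 1);   # [[4 6] [12 14]]
    say reduce-dim(@data, 0);   # [[6 8] [10 12]]

but that is exactly the sort of bookkeeping a library should be doing for me, ideally with dense storage underneath.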

  LP> I think we're beginning to re-invent PDL.  Poorly.

but is there a p6 PDL yet? they may not need much, with multi-dim ops,
slices, hyper and reduce all built in! also, with type int (platform
ints), they can get the dense storage needed (but losing any dimensional
flexibility).
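
(On the storage point: natively typed, shaped arrays are indeed part of the design, so the dense layout at least comes cheaply. A quick sketch:)

    # a natively typed, fixed-shape array: packed platform ints rather
    # than containers holding boxed Int objects
    my int @grid[100;100];

    @grid[3;4] = 42;
    say @grid[3;4];     # 42
    say @grid.shape;    # (100 100)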

Yes. I don't quite understand that point. I see PDL rather as a limited version of MATLAB, and MATLAB as a limited version of APL. The right set of vector primitives can be combined powerfully in some very non-obvious ways (cf. the recent post by Edward Cherlin), and APL and its successors have worked this out over 30+ years. Holding up PDL as the design benchmark instead doesn't seem quite right.
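
Even staying with what's already specified, the primitives compose in that APL-ish way. For instance, a small sketch using nothing beyond zip, reduce and hyperops:

    my @m = [1, 2, 3],
            [4, 5, 6];

    say [Z+] @m;               # (5 7 9)       column sums: reduce + zip
    say ([Z+] @m) >>/>> +@m;   # (2.5 3.5 4.5) column means: add one hyperop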


Rgds

Anthony
