On Thu, 5 Sep 2002, Brent Dax wrote:
> Sean O'Rourke:
> # > # 4 - the other arrays boosted to the highest dimension
> # > It's already been defined to be #4.
> #
> # Argh. Then I need to whinge a bit -- what if it's a ragged
> # array? What if different elements have different dimensions
> # themselves, e.g. "[1,[2,3]]"? I think there's serious
> # can-of-worms potential here, and by making hyping more
> # "intelligent", we'll actually make it useful in fewer cases.
>
> I don't see any cases where it becomes *less* useful by making it more
> intelligent. Can you give an example?
I was thinking of something like this:

    my (@xs, @ys) = two_arrays_of_arrays();
    my @zs = @xs ^dot_product @ys;

Here I want the hype to hand each top-level pair of sub-arrays to
dot_product as-is, not to keep descending into them.
> As for the ragged array argument, I would argue for the following
> statement in the documentation:
>
> The behavior of hyperoperators with ragged and recursive
> data structures is undefined (and likely to be downright
> weird).
But we can do better than this if we (a) say that hyping only goes down
one level by default; and (b) reduce hyping (without complaint) to a
scalar operation when both operands are scalars.
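Roughly this, in Perl 5 terms -- just a sketch of the semantics I
mean, not a proposed implementation (the name `hype' and the arrayref
representation are placeholders):

    # (a) distribute the op over the top level only;
    # (b) fall back to a plain scalar op when neither operand is an array
    sub hype {
        my ($op, $x, $y) = @_;

        # (b) both operands are scalars: just apply the op, no complaint
        return $op->($x, $y)
            unless ref($x) eq 'ARRAY' or ref($y) eq 'ARRAY';

        # (a) one level only: pair up the elements (replicating a
        # scalar operand) and hand each pair to the op as-is
        my @xs = ref($x) eq 'ARRAY' ? @$x : map { $x } @$y;
        my @ys = ref($y) eq 'ARRAY' ? @$y : map { $y } @$x;
        return [ map { $op->($xs[$_], $ys[$_]) } 0 .. $#xs ];
    }

    # e.g. hype(sub { $_[0] + $_[1] }, [1, 2, 3], 10) gives [11, 12, 13]

With those two rules the dot_product example above does the right
thing without the hype having to be clever about dimensions at all.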
> Absolutely. If we know the dimensions at compile-time, we should use
> that information. But I have a sneaking suspicion that that won't be
> the general case.
D'oh. I meant 'number of dimensions' rather than dimensions. Your
statement above seems right if and only if hyping applies to all
dimensions by default.
> c) possibly speed up execution (optimizer gooooood...)
We already get some benefit here. For more than that, you'd probably
want a bona fide matrix or vector PMC, for which the normal operators
would do "hyper" things.
> # That would be pretty hard (I think), since it involves
> # coalescing two subs into one, then cleaning out a bunch of
> # temporaries.
>
> Perhaps. Looking for identical subs seems like an obvious size
> optimization to me, but I'm not really a compiler guy. :^)
You could be, any day you chose ;). At least as much as yours truly,
/s