No worries.

It helps a lot to keep track of what dimensions you think each thing
has, in a little comment like in my examples below.  After a while,
you'll begin thinking naturally that way...

Most common operations have zero active dimensions, so every dimension
is a thread dimension.  (Multiplication is like that.)  The collapse
and reduce operators all have one active dimension (e.g. "maximum" and
"sumover").
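For instance, a quick sketch of that distinction (the variable names are made up; dims are noted in little comments, as suggested above):

```perl
use PDL;

# $im dims: (3,2) -- a tiny stand-in "image"
my $im = pdl([ [1,2,3],
               [4,5,6] ]);

# Elementwise multiply: zero active dims, so both dims just thread.
my $twice = $im * 2;        # twice dims: (3,2)

# sumover consumes its one active dimension (the first); the rest thread.
my $sums = $im->sumover;    # sums dims: (2) => [6 15]

# maximum is the same kind of operator.
my $maxes = $im->maximum;   # maxes dims: (2) => [3 6]
```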

Cheers,
Craig


On Apr 16, 2008, at 6:10 AM, Ashish Mahabal wrote:
>
> Hello Craig,
>
> Thanks for the quick tutorial. That recipe worked and it has made  
> the program hugely faster. Thanks.
>
> Indeed I still have not come to terms with the dimension  
> manipulations and threading. Getting there, slowly.
>
> Cheers,
> ashish
>
> On Tue, 15 Apr 2008, Craig DeForest wrote:
>
>> You are looping because you are not trusting the threading engine.
>>
>> The strategy is to push your unused dimensions above the active  
>> dimensions of your operator, and then let them ride along.  Matrix  
>> multiplication has two active dimensions, so you have to add dummy  
>> dimensions to make your data into a collection of (1,n,x,y) column  
>> vectors and (n,1,x,y) row vectors.  Then you can just carry out  
>> your multiplication.  You probably want to get out of the habit of  
>> using 'slice' and into the habit of using "PDL::NiceSlice", which  
>> yields a much cleaner syntax.
>>
>>      $colvecs = $allims->(*1);                 # colvecs dims: (1,n,x,y)
>>      $rowvecs = $allims->(:,*1);               # rowvecs dims: (n,1,x,y)
>>      $result = $rowvecs x $matrix x $colvecs;  # result dims: (1,1,x,y)
>>      $out = $result->clump(3);                 # out dims: (x,y)
>>
>> A little more concise:
>>
>>      $vecs = $allims->(*1);
>>      $out = ($vecs->transpose x $matrix x $vecs)->clump(3);
>>
>> You can even do it in one expression if you prefer:
>>
>>      $out = ($allims->(:,*1) x $matrix x $allims->(*1))->clump(3);
>>
>>
>> BTW, your '->cat' call is doing nothing, since it sticks together a  
>> bunch of PDLs and you're just giving it one PDL.
>>
>> Cheers,
>> Craig
>>
>>
>> On Apr 15, 2008, at 12:49 PM, Ashish Mahabal wrote:
>>> Hi,
>>> I know there has to be a better, more succinct way of doing this,  
>>> but I
>>> just don't seem to get it, not having done any explicit threading:
>>> $allims is a stack of n images of size (xsize,ysize) from which I
>>> need to pull out vectors that take the same pixel from each image
>>> in the stack, and do some matrix operations on them.  The loops
>>> clearly slow it down.  How do I get rid of the loops?  ($invcovari
>>> is an nxn matrix.)
>>>
>>> my $out = ones($xsize, $ysize);
>>> for my $j (0..$xsize-1) {
>>>     for my $k (0..$ysize-1) {
>>>         my $vecf  = (slice $allims, "*,$j,$k")->flat->cat;
>>>         my $vecft = $vecf->transpose;
>>>         (slice $out, "$j,$k") .= $vecf x ($invcovari x $vecft);
>>>     }
>>> }
>>> Thanks.
>>> -ashish
>>> Ashish Mahabal, Caltech Astronomy, Pasadena, CA 91125
>>> http://www.astro.caltech.edu/~aam aam at astro.caltech.edu
>>> Why is "abbreviation" such a long word?
>>> _______________________________________________
>>> Perldl mailing list
>>> [email protected]
>>> http://mailman.jach.hawaii.edu/mailman/listinfo/perldl
>>
>
> -ashish
>
> Ashish Mahabal, Caltech Astronomy, Pasadena, CA 91125
> http://www.astro.caltech.edu/~aam aam at astro.caltech.edu
>
> My computer isn't that nervous...it's just a bit ANSI.
>


