I have a section of a program that looks like this:

        my %means;                      # per-user mean, keyed by user
        my $users = get_all_users();
        foreach my $user (@$users) {
                my $details = get_user_details($user);
                my $sum   = 0;
                my $count = 0;
                # The values being averaged sit at the odd indexes (1, 3, 5, ...).
                for (my $i = 1; $i < @$details; $i += 2) {
                        $sum += $details->[$i];
                        $count++;
                }
                $means{$user} = $sum / $count;
        }

The $users variable is a reference to an array with about 450,000 entries. The
$details variable is an array reference, typically with around 500 elements
(note, however, that the inner loop steps up in twos). Overall, the innermost
statements execute about 100 million times. How can I optimise this? If I were
using C, I'd use pointer arithmetic on the array to avoid the multiplication
involved in array indexing. A naive reading of this suggests that Perl does a
multiply for every array access; is that the case, or is it optimised away? If
not, is there a way to use pointer arithmetic (or something that behaves
similarly, but is probably safer)? It would probably be possible to
restructure the array so that the loop could step up in ones and use a
foreach, but in most cases it'll be common to want both values of each pair,
which is why I didn't do that (a sketch of a slice-based variant that keeps
the pairs intact is below).
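
One variant that keeps both values of each pair available would be to slice
out every second element and let List::Util's sum add them up. A sketch
(untested, using the same get_all_users()/get_user_details() as above):

        use List::Util qw(sum);

        my %means;
        my $users = get_all_users();
        foreach my $user (@$users) {
                my $details = get_user_details($user);
                # Pull every second element (indexes 1, 3, 5, ...) out as a flat
                # list in one slice, then average it without any index arithmetic.
                my @values = @{$details}[ grep { $_ % 2 } 0 .. $#$details ];
                $means{$user} = sum(@values) / @values if @values;
        }

The idea is that the per-element work happens inside the slice and sum()
rather than in Perl-level opcodes, though whether that actually wins would
need benchmarking.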

Anyone know any good resources for this kind of thing?

