On May 23, Elias Assmann said:

>On Thu, 23 May 2002, Jeff 'japhy' Pinyan wrote:
>
>> On May 23, Craig Hammer said:
>>
>> >Very nice explanation.  One thing though, I am not using uniq to remove
>> >duplicates.  I am using it to get a count of duplicates.  In my case, I am
>> >creating a threshold to determine when someone (malicious) is scanning my
>> >address ranges.
>>
>> Ah, I see.  Well then, you can use either method to obtain the count:
>>
>>   # a -- from perlfaq4
>>   my $prev = "NO_SUCH_VALUE";
>>   my $dup = 0;
>>   my @sorted = grep { $_ ne $prev ? $prev = $_ : ++$dup } sort @records;
>
>Correct me if I'm wrong, but I don't think this does what Craig wants,
>since $dup would end up containing all the values of %seen from b)
>added up, with no indication of how many values were seen more than
>once, how often each of these was seen or what those values were (all
>of which I assume to be interesting in this case).

Oh, well if that's what he wants, then the hash solution is the most
powerful:

  my %seen;
  $seen{$_}++ for @sorted = sort @records;

Now you have %seen, which maps each distinct element of @sorted to the number
of times it appeared.  To get the values that occurred more than once:

  @duplicates = grep $seen{$_} > 1, keys %seen;

and their corresponding counts:

  print "$_ seen $seen{$_} times\n" for @duplicates;

-- 
Jeff "japhy" Pinyan      [EMAIL PROTECTED]      http://www.pobox.com/~japhy/
RPI Acacia brother #734   http://www.perlmonks.org/   http://www.cpan.org/
** Look for "Regular Expressions in Perl" published by Manning, in 2002 **
<stu> what does y/// stand for?  <tenderpuss> why, yansliterate of course.
[  I'm looking for programming work.  If you like my work, let me know.  ]

