Thanks - just what I was looking for.

Bradford Carpenter wrote:
> On Thu, 22 Jun 2006 22:13:28 -0700, Marc Perkel wrote:
>
>> Chris Meadors wrote:
>>>  Marc Perkel wrote:
>>>>  There are probably people out there who just know how to do this in a 
>>>>  simple way.
>>>>
>>>>  I have two files. Both files are text files that have IP addresses on 
>>>>  separate lines. Both are alphabetical. What I want to do is read file A 
>>>>  and file B and create file C that has all the IP addresses in file A 
>>>>  that do not match the addresses in file B.
>>>>
>>>>  Trying a new spam processing trick creating a whitelist of every IP 
>>>>  address where I got 10 or more hams and no spams. That way I can just 
>>>>  have a host whitelist that I don't have to run through spamassassin.
>>>  Is file B a true subset of file A?  That is, does it contain no
>>>  addresses that are not also in A?  And does each address appear only
>>>  once in each file?  If both of those are true, this will work:
>>>
>>>   cat fileA fileB | sort | uniq -u > fileC
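A quick sketch of why `uniq -u` does the job under those two assumptions, using made-up addresses: every address in both files appears twice in the combined sorted stream, so `uniq -u` (print only non-repeated lines) drops it, leaving the A-only addresses.

```shell
# Hypothetical sample data: fileB is a true subset of fileA,
# and each address appears only once per file.
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > fileA
printf '10.0.0.2\n' > fileB

# Duplicated addresses (those in both files) are suppressed by -u.
cat fileA fileB | sort | uniq -u > fileC
cat fileC
# -> 10.0.0.1
#    10.0.0.3
```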
>> Unfortunately no. File B has addresses not in file A.
>
> You might try comm. It needs two sorted files to compare, so you may want to 
> run sort on the files first to be sure the comparisons are valid. Maybe 
> something like:
>
> sort -u /path/to/fileA > /path/to/tempA
> sort -u /path/to/fileB > /path/to/tempB
> comm -23 /path/to/tempA /path/to/tempB > /path/to/fileC
>
> This will give you a list of the lines that appear only in fileA.
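A small end-to-end illustration of the comm approach with hypothetical addresses; unlike the `uniq -u` trick, this handles fileB containing addresses that are not in fileA. `comm -23` suppresses column 2 (lines only in the second file) and column 3 (lines common to both), leaving only the lines unique to the first file.

```shell
# Made-up sample data: 10.0.0.9 is in fileB but not fileA.
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > fileA
printf '10.0.0.2\n10.0.0.9\n' > fileB

# comm requires sorted input; -u also drops duplicate addresses.
sort -u fileA > tempA
sort -u fileB > tempB

# Keep only lines that are in fileA but not fileB.
comm -23 tempA tempB > fileC
cat fileC
# -> 10.0.0.1
#    10.0.0.3
```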
>
> Best regards,
> Brad Carpenter
>
-- 
## List details at http://www.exim.org/mailman/listinfo/exim-users 
## Exim details at http://www.exim.org/
## Please use the Wiki with this list - http://www.exim.org/eximwiki/
