2011-05-07 05:11, Yuri Pankov wrote:
> On Sat, May 07, 2011 at 04:23:40AM +0200, Rolf Nielsen wrote:
>> 2011-05-07 02:09, Rolf Nielsen wrote:
>>> Hello all,
>>>
>>> I have two text files, quite extensive ones. They have some lines in
>>> common and some lines are unique to one of the files. The lines that do
>>> exist in both files are not necessarily in the same location. Now I need
>>> to compare the files and output a list of lines that exist in both
>>> files. Is there a simple way to do this? diff? awk? sed? cmp? Or a
>>> combination of two or more of them?
>>>
>>> TIA,
>>> Rolf
>> sort file1 file2 | uniq -d
> I very seriously doubt that this line does what you want...
>
> $ printf "a\na\na\nb\n" > file1; printf "c\nc\nb\n" > file2; sort file1 file2 | uniq -d
> a
> b
> c
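[An aside, not from the thread: the pitfall Yuri demonstrates is that `uniq -d` also reports lines duplicated *within* a single file. A minimal sketch of a fix is to deduplicate each file first, so any duplicate in the merged stream must come from both files:]

```shell
# Deduplicate each input on its own first; after that, a line can
# only appear twice in the merged stream if both files contain it.
printf 'a\na\na\nb\n' > file1
printf 'c\nc\nb\n' > file2
sort -u file1 > f1.sorted
sort -u file2 > f2.sorted
sort f1.sorted f2.sorted | uniq -d
# prints:
# b
```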
Ok, I see the problem now. The files I have don't contain any duplicate lines, though, so that possibility didn't even cross my mind.
> Try this instead (probably bloated):
>
> sort < file1 | uniq | tr -s '\n' '\0' | xargs -0 -I % grep -Fx % file2 | sort | uniq
>
> There is comm(1), of course, but it expects files to be already sorted.
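[An aside, not from the thread: the xargs pipeline above can be collapsed into a single grep(1) call, since -f reads patterns from a file; with -F the patterns are matched literally and with -x only whole lines match. A sketch with made-up sample data:]

```shell
# -F fixed strings, -x whole-line match, -f read patterns from file1.
# Prints each line of file2 that appears verbatim in file1;
# sort -u removes any duplicates from the output.
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ndate\napple\n' > file2
grep -Fxf file1 file2 | sort -u
# prints:
# apple
# banana
```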
The files are sorted, so comm would work. Several people have already
suggested comm, though I haven't tried it, as combining sort and uniq
does what I want with my specific files.
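[For reference, the comm(1) approach suggested above would look like this; a sketch assuming both files are sorted, as stated. Column 1 of comm's output holds lines unique to the first file, column 2 lines unique to the second, column 3 lines common to both, so -12 suppresses the first two columns and leaves only the intersection:]

```shell
# comm requires sorted input; -12 prints only lines present in both files.
printf 'a\nb\nc\nd\n' > file1
printf 'b\nd\ne\n' > file2
comm -12 file1 file2
# prints:
# b
# d
```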
> HTH,
> Yuri
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"