I have had this doubt for quite a long time. It could be an absurd question, but I need a solution.
How do we efficiently compare two files, each containing terabytes of records?
This could be related to external sorting as well, but I couldn't find an efficient solution.
Can somebody please help me understand how to proceed?
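To make the question concrete: the approach I have in mind is a reduce-side comparison, where each record is keyed (here by a hash of its content, which is my assumption) and tagged with its source file, so that after the shuffle all copies of a record land in one reduce group and the reducer can report records present in only one file. Below is a minimal local sketch of that logic; the function and file names are hypothetical, and a real Hadoop job would replace the in-memory dictionary with the framework's external shuffle/sort, which is what makes it work at terabyte scale.

```python
import hashlib
from collections import defaultdict

def mapper(filename, tag):
    # Emit (key, (tag, record)) for each line; the key is a
    # content digest, the tag says which input file it came from.
    with open(filename) as f:
        for line in f:
            record = line.rstrip("\n")
            key = hashlib.sha1(record.encode()).hexdigest()
            yield key, (tag, record)

def compare(file_a, file_b):
    # Group by key (the "shuffle" step, done in memory here),
    # then reduce: report records present in only one file.
    groups = defaultdict(list)
    for key, value in mapper(file_a, "A"):
        groups[key].append(value)
    for key, value in mapper(file_b, "B"):
        groups[key].append(value)
    only_a, only_b = [], []
    for values in groups.values():
        tags = {t for t, _ in values}
        if tags == {"A"}:
            only_a.extend(r for _, r in values)
        elif tags == {"B"}:
            only_b.extend(r for _, r in values)
    return only_a, only_b
```

Is this the right direction, or is there a better-known pattern for this in MapReduce?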
-- 
View this message in context: 
http://old.nabble.com/Compare-effectively-TerraBytesofRecords-with-another-Using-Hadoop-%28MapReduce%29--tp32503928p32503928.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
