On Oct 15, 2007, at 6:57 AM, Ming Yang wrote:

I was writing a test MapReduce program and noticed that the
input file was always broken down into separate lines and fed
to the mapper. However, in my case I need to process the whole
file in the mapper, since there are some dependencies between
lines in the input file. Is there any way I can achieve this --
process the whole input file, either text or binary, in the mapper?

http://wiki.apache.org/lucene-hadoop/FAQ#10
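
The FAQ entry above covers this case: the usual approach is to
subclass FileInputFormat, override isSplitable() to return false so
each file is handed to a single map task, and supply a RecordReader
that emits the entire file as one record. A rough sketch along those
lines is below. The WholeFileInputFormat / WholeFileRecordReader names
are just illustrative, and it is written against the newer
org.apache.hadoop.mapreduce API; the same hooks exist on the
org.apache.hadoop.mapred classes that were current in 2007.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Treats each input file as a single record: the key is unused and
// the value is the entire file contents as bytes.
public class WholeFileInputFormat
    extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;  // never split a file across mappers
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    WholeFileRecordReader reader = new WholeFileRecordReader();
    reader.initialize(split, context);
    return reader;
  }
}

// Reads the whole file backing the split and returns it as one
// (NullWritable, BytesWritable) pair.
class WholeFileRecordReader
    extends RecordReader<NullWritable, BytesWritable> {

  private FileSplit fileSplit;
  private Configuration conf;
  private BytesWritable value = new BytesWritable();
  private boolean processed = false;

  @Override
  public void initialize(InputSplit split, TaskAttemptContext context) {
    this.fileSplit = (FileSplit) split;
    this.conf = context.getConfiguration();
  }

  @Override
  public boolean nextKeyValue() throws IOException {
    if (processed) {
      return false;
    }
    byte[] contents = new byte[(int) fileSplit.getLength()];
    Path file = fileSplit.getPath();
    FileSystem fs = file.getFileSystem(conf);
    FSDataInputStream in = null;
    try {
      in = fs.open(file);
      IOUtils.readFully(in, contents, 0, contents.length);
      value.set(contents, 0, contents.length);
    } finally {
      IOUtils.closeStream(in);
    }
    processed = true;
    return true;
  }

  @Override
  public NullWritable getCurrentKey() { return NullWritable.get(); }

  @Override
  public BytesWritable getCurrentValue() { return value; }

  @Override
  public float getProgress() { return processed ? 1.0f : 0.0f; }

  @Override
  public void close() { }
}

The job driver then selects it with
job.setInputFormatClass(WholeFileInputFormat.class), and the mapper
receives the whole file (text or binary) in its single value. Note
that this loads the entire file into memory, so it only makes sense
for files that fit comfortably in a task's heap.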
