Hi all,

Why does MapReduce process an input split one line at a time?

Would it not be faster to read the entire input split (e.g., 128 MB)
into memory at once and then process the data (as a string) in memory?

With the current record-at-a-time approach, doesn't the Java
application end up making a separate read system call to the
operating system for every line?
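(For context on my question: my understanding is that Java's
BufferedReader pulls a large chunk from the underlying stream in one
OS read and then serves readLine() from that in-memory buffer, so
reading line by line need not mean one system call per line. Hadoop's
LineRecordReader reads from the stream in a buffered way as well. A
minimal sketch of the buffering behavior, using only the standard
library:)

```java
import java.io.BufferedReader;
import java.io.StringReader;

public class LineReadDemo {
    public static void main(String[] args) throws Exception {
        // BufferedReader fills an internal buffer (8 KB here) with one
        // bulk read of the underlying stream, then answers readLine()
        // calls from that buffer -- it does not issue one low-level
        // read per line. StringReader stands in for a real file/HDFS
        // stream in this sketch.
        BufferedReader reader =
                new BufferedReader(new StringReader("a\nb\nc\n"), 8192);
        int lines = 0;
        while (reader.readLine() != null) {
            lines++;
        }
        reader.close();
        System.out.println(lines); // prints 3
    }
}
```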

Best Regards,
Daegyu

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
For additional commands, e-mail: user-h...@hadoop.apache.org
