Hi, I have a 3-node cluster, with the JobTracker running on one machine and TaskTrackers on the other two. Instead of using HDFS, I have written my own FileSystem implementation. I can run a MapReduce job on this cluster, but I cannot tell from the logs or the TaskTracker web UI exactly which data sets were processed by each of the two slaves.
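For concreteness, the kind of per-slave breakdown I am hoping to get is "map task -> tracker that ran it". One idea I had is to parse the job history files that the JobTracker writes (I believe under ${hadoop.log.dir}/history, and also in _logs/history under the job output directory). The sketch below is only my guess at how that could look: it assumes the Hadoop 1.x JobHistory text format (MapAttempt lines carrying TASKID and TRACKER_NAME attributes), and the task and tracker names in it are made up for illustration. Please correct me if the actual format is different:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull per-attempt tracker assignments out of a Hadoop 1.x
// job history file. The line shape matched here is an assumption
// about the JobHistory text format in Hadoop 1.0.4; it should be
// verified against real history files.
public class TrackerAssignments {

    // MapAttempt start lines are assumed to look like:
    // MapAttempt TASK_TYPE="MAP" TASKID="..." ... TRACKER_NAME="..." ...
    private static final Pattern MAP_ATTEMPT = Pattern.compile(
        "^MapAttempt .*TASKID=\"([^\"]+)\".*TRACKER_NAME=\"([^\"]+)\"");

    /** Returns "taskid -> tracker" for a MapAttempt line, else null. */
    public static String describe(String line) {
        Matcher m = MAP_ATTEMPT.matcher(line);
        return m.find() ? m.group(1) + " -> " + m.group(2) : null;
    }

    public static void main(String[] args) {
        // Hypothetical history line with made-up task and tracker names.
        String sample = "MapAttempt TASK_TYPE=\"MAP\" "
            + "TASKID=\"task_201301011200_0001_m_000000\" "
            + "TASK_ATTEMPT_ID=\"attempt_201301011200_0001_m_000000_0\" "
            + "START_TIME=\"1357040000000\" "
            + "TRACKER_NAME=\"tracker_slave1:localhost/127.0.0.1:40123\" "
            + "HTTP_PORT=\"50060\" .";
        System.out.println(describe(sample));
    }
}
```

Running this over every line of a job's history file would, if my assumption about the format holds, list which tracker ran each map attempt; correlating that with the input splits would tell me what each slave processed.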
Could you please suggest a way to find out exactly what each of my TaskTrackers did over the course of the job execution? I am using the Hadoop-1.0.4 source code.
Thanks & Regards,
Nikhil
