Create test scenario for "distributed cache file behaviour, when dfs file is not modified"
------------------------------------------------------------------------------------------
Key: MAPREDUCE-1672
URL: https://issues.apache.org/jira/browse/MAPREDUCE-1672
Project: Hadoop Map/Reduce
Issue Type: New Feature
Components: test
Affects Versions: 0.22.0
Reporter: Iyappan Srinivasan
Fix For: 0.22.0

This test scenario covers the behaviour of a distributed cache file that is not modified while it is accessed by at most two jobs. Once a job uses a distributed cache file, the file is localized under mapred.local.dir on the TaskTracker. If a subsequent job uses the same, unmodified file, it is not localized again. So, if two jobs end up running tasks on the same TaskTracker, the distributed cache file should not be found there twice.

The test case should run a job with a distributed cache file, obtain a handle to the TaskTracker of each of the job's tasks, and verify that the cache file is present in the proper directory with the proper permissions. When the job runs again and any of its tasks hits a TaskTracker that ran a task of the previous job, the file should not be uploaded again and the task should use the existing copy.
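A minimal sketch of the job setup this scenario exercises, assuming the old org.apache.hadoop.mapred API of the 0.22 line; the class name and DFS paths below are illustrative only, and the real test would additionally inspect each TaskTracker's mapred.local.dir between the two runs rather than just submit the jobs.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class DistributedCacheUnmodifiedFileSketch {

  /** Builds an identity job configuration that pulls in the shared cache file. */
  private static JobConf buildJob(Path input, Path output, URI cacheFile) {
    JobConf conf = new JobConf(new Configuration(), DistributedCacheUnmodifiedFileSketch.class);
    conf.setJobName("distributed-cache-unmodified-file");
    FileInputFormat.setInputPaths(conf, input);
    FileOutputFormat.setOutputPath(conf, output);
    // The framework localizes this file once per TaskTracker under
    // mapred.local.dir; an unmodified file is reused by later jobs.
    DistributedCache.addCacheFile(cacheFile, conf);
    return conf;
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical DFS paths for input, output and the cache file.
    Path input = new Path("/tmp/dc-input");
    URI cacheFile = new URI("/tmp/distributed.cache.file");

    RunningJob first = JobClient.runJob(buildJob(input, new Path("/tmp/dc-output-1"), cacheFile));
    // The second job uses the same, unmodified cache file; a TaskTracker that
    // ran a task of the first job should not localize the file a second time.
    RunningJob second = JobClient.runJob(buildJob(input, new Path("/tmp/dc-output-2"), cacheFile));

    System.out.println("first: " + first.isSuccessful() + ", second: " + second.isSuccessful());
  }
}
{code}

Between the two submissions, the test described above would query each task's TaskTracker and assert that the localized copy of the cache file exists exactly once with the expected permissions.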