Attila Sasvari created MAPREDUCE-6874:
-----------------------------------------

             Summary: Make DistributedCache check if the content of a directory has changed
                 Key: MAPREDUCE-6874
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6874
             Project: Hadoop Map/Reduce
          Issue Type: New Feature
            Reporter: Attila Sasvari


DistributedCache does not check recursively whether the content of a directory has changed when the directory was added to the cache with {{DistributedCache.addCacheFile()}}.

h5. Background
I have an Oozie workflow on HDFS:
{code}
example_workflow
├── job.properties
├── lib
│   ├── components
│   │   ├── sub-component.sh
│   │   └── subsub
│   │       └── subsub.sh
│   ├── main.sh
│   └── sub.sh
└── workflow.xml
{code}
I executed the workflow, then modified {{subsub.sh}} and replaced the file on HDFS. When I re-ran the workflow, DistributedCache did not notice the change, because the timestamp of the {{components}} directory had not changed. As a result, the old version of the script was materialized.

This behaviour might be related to [determineTimestamps()|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/ClientDistributedCacheManager.java#L84], which records only the top-level timestamp of each cache entry. To make the workflow use the new script, I had to re-upload the whole {{components}} directory.
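One possible direction (a rough, hypothetical sketch only, not the actual {{ClientDistributedCacheManager}} implementation) would be to take the newest modification time of any entry under the directory instead of relying on the directory's own timestamp. The sketch below uses plain {{java.nio}} so it is self-contained; the real fix would traverse with Hadoop's {{FileSystem.listStatus()}} and compare {{FileStatus.getModificationTime()}} values the same way:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: compute the newest modification time of anything
// under a directory, so that replacing a deeply nested file (like
// lib/components/subsub/subsub.sh) changes the effective timestamp even
// though the parent directory's own mtime stays the same.
public class NewestTimestamp {
    static long newestModificationTime(Path dir) throws IOException {
        try (var paths = Files.walk(dir)) {          // visits dir and all descendants
            return paths
                .mapToLong(p -> {
                    try {
                        return Files.getLastModifiedTime(p).toMillis();
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                })
                .max()
                .orElseThrow();                       // stream always contains dir itself
        }
    }
}
```

With a check like this, touching {{subsub.sh}} would be enough to invalidate the cached copy of the {{components}} directory, at the cost of one extra listing pass per cached directory at submission time.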


h6. Some more info:
In Oozie, [DistributedCache.addCacheFile()|https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/action/hadoop/JavaActionExecutor.java#L625] is used to add files to the distributed cache.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
