[
https://issues.apache.org/jira/browse/HADOOP-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575650#action_12575650
]
Hemanth Yamijala commented on HADOOP-2899:
------------------------------------------
We had a discussion with Sameer and Devaraj on this issue. Logically, the
mapred system directory is something that the admin would set up in a static
job tracker case, and he would then be responsible for managing it. In the HOD
world, HOD is playing this role and generating the mapred system directory, so
it feels like HOD should set it up and clean it up. However, in the current
design of HOD, this is not easy to do in a short time. Therefore, I am moving
this to be fixed in Hadoop 0.17 in HOD. In the meantime, we can write a simple
utility to clean up these directories periodically; such a utility / script
reduces the impact of not fixing this immediately.
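The interim utility could be a small cron-driven shell script. Below is a minimal sketch, not an agreed design: the helper name list_mapredsystem_dirs is mine, it assumes the 0.16-era "hadoop dfs -ls" / "hadoop dfs -rmr" commands, and it assumes it runs in a quiet window when no HOD clusters are allocated (it has no way to tell a live per-host directory from a stale one):

```shell
#!/bin/sh
# Hypothetical periodic cleanup of stale HOD mapred system directories.
# Run only when no HOD clusters are allocated, since nothing below
# distinguishes a directory in use from one left behind by deallocation.

HADOOP=${HADOOP_HOME:-/usr/local/hadoop}/bin/hadoop

# Pick the per-host subdirectory paths out of "hadoop dfs -ls /mapredsystem"
# output (in 0.16-era listings the path is the first field; the "Found N items"
# header line is filtered out because its first field is not a path).
list_mapredsystem_dirs() {
  awk '$1 ~ /^\/mapredsystem\// { print $1 }'
}

cleanup() {
  "$HADOOP" dfs -ls /mapredsystem | list_mapredsystem_dirs |
  while read -r dir; do
    echo "removing stale directory: $dir"
    "$HADOOP" dfs -rmr "$dir"
  done
}
```

Scheduling this from cron (e.g. nightly) would keep /mapredsystem from accumulating stale per-host directories until the real fix lands in 0.17.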
> hdfs:///mapredsystem directory not cleaned up after deallocation
> -----------------------------------------------------------------
>
> Key: HADOOP-2899
> URL: https://issues.apache.org/jira/browse/HADOOP-2899
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/hod
> Affects Versions: 0.16.0
> Reporter: Luca Telloli
> Assignee: Devaraj Das
> Fix For: 0.17.0
>
>
> Each submitted job creates an hdfs:///mapredsystem directory, created by (I
> guess) the hodring process. The problem is that it's not cleaned up at the end
> of the process; a use case would be:
> - user A allocates a cluster; the hodring is svrX, so a /mapredsystem/svrX
> directory is created
> - user A deallocates the cluster, but that directory is not cleaned up
> - user B allocates a cluster, and the first node chosen as hodring is svrX,
> so the hodring tries to write to hdfs:///mapredsystem but fails
> - allocation succeeds, but there's no hodring running; looking at
> 0-jobtracker/logdir/hadoop.log under the temporary directory I can read:
> 2008-02-26 17:28:42,567 WARN org.apache.hadoop.mapred.JobTracker: Error
> starting tracker: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.fs.permission.AccessControlException: Permission denied:
> user=B, access=WRITE, inode="mapredsystem":hadoop:supergroup:rwxr-xr-x
> I guess a possible solution would be to clean up those directories during the
> deallocation process.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.