Re: file permission issue

2016-10-19 Thread CB
Thanks Ravi,

That's very helpful.

- Chansup



Re: file permission issue

2016-10-17 Thread Ravi Prakash
Hi!

https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java#L1524

Just FYI, there are different kinds of distributed cache; here's a good
article from Vinod:
http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/

HTH
Ravi

On Mon, Oct 17, 2016 at 7:56 AM, CB  wrote:

> Hi,
>
> I'm running Hadoop 2.7.1 release.
> While running a MapReduce job, I encountered the file permission issue
> shown below, because I'm working in a Linux environment where the world
> permission bits are disabled.
>
> 2016-10-14 15:51:45,333 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /state/partition1/hadoop/nm-local-dir/nmPrivate/container_1476470591621_0004_02_01.tokens. Credentials list:
>
> 2016-10-14 15:51:45,375 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Permissions incorrectly set for dir /state/partition1/hadoop/nm-local-dir/usercache, should be rwxr-xr-x, actual value = rwxr-x---
>
> Does anyone have suggestions for working around the issue in a single-user
> environment, where one user runs all the services and the MapReduce jobs?
>
> I'm not familiar with the source code, but if you could suggest where to
> modify it to relax the check, it would be appreciated.
>
> Thanks,
> - Chansup
>
>
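The check that produces the WARN above compares the actual mode of the NodeManager's local directories against an expected rwxr-xr-x. Here is a minimal Python sketch (an illustration of the mechanism, not Hadoop's actual Java code) of how a process-wide umask that disables the world bits makes a directory requested with mode 755 come out as 750 and fail such a check:

```python
import os
import stat
import tempfile

# Simulate an environment where the world permission bits are disabled
# via the process umask, as in the reporter's setup.
os.umask(0o027)

base = tempfile.mkdtemp()
usercache = os.path.join(base, "usercache")
os.mkdir(usercache, 0o755)  # request rwxr-xr-x; the umask strips the world bits

actual = stat.S_IMODE(os.stat(usercache).st_mode)
expected = 0o755

print(oct(actual))  # 0o750, i.e. rwxr-x---
if actual != expected:
    print("Permissions incorrectly set for dir %s" % usercache)
```

Because the kernel applies `mode & ~umask` at creation time, no value passed to mkdir can produce world bits while that umask is in effect, which is why the NodeManager's strict equality check cannot be satisfied without either changing the umask or relaxing the check.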


Re: File Permission Issue using Distributed Cache of Hadoop-2.2.0

2014-05-30 Thread sam liu
My colleague found that the distcp tool also drops the execution permission.

I also found a difference between Hadoop 1.x and Hadoop 2.x:
- On Hadoop 2.2.0, I can use the command 'hadoop dfs -chmod 755 test01' to add
the execution permission to the HDFS file test01: '*-rwxr-xr-x*   1 admin
admin   8465 2014-05-30 16:45 test01'
- However, on Hadoop 1.1.1, I cannot add the execution permission to an HDFS
file with that command

I have two further questions:

*- Does this mean that HDFS 2.2.0 supports the execution permission, but HDFS
1.1.1 does not?*
*- If HDFS 2.2.0 supports the execution permission, how can I keep it after
putting a file onto HDFS?*

This issue has confused us for a long time, and any comments/suggestions will
be appreciated!
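The chmod observation above matches POSIX semantics on a local filesystem too: chmod applies the requested bits explicitly and is not filtered by the umask, whereas permissions assigned at creation time are. A small local-filesystem sketch (not HDFS itself, just an illustration of the same rule):

```python
import os
import stat
import tempfile

os.umask(0o027)  # a restrictive umask is in effect

fd, path = tempfile.mkstemp()
os.close(fd)

# chmod sets the requested bits directly; the umask is not applied here.
os.chmod(path, 0o755)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o755, i.e. rwxr-xr-x
```

This is consistent with 'hadoop dfs -chmod 755' succeeding even where the file was created without the execute bit: chmod is an explicit assignment, not a creation-time default.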




Re: File Permission Issue using Distributed Cache of Hadoop-2.2.0

2014-05-30 Thread sam liu
Hi,

On Hadoop 1.1.1, I did a test on execution permission as below:
1. Set '*dfs.umaskmode*' to '*000*' in hdfs-site.xml
2. The permission of the test file on the Linux local file system is
'*-rwxr-xr-x* 1 admin admin 12297 5月 30 01:44 test'
3. Put the test file to hdfs using command 'hadoop dfs -put test test'

Result:
In hdfs, the permission of the uploaded file is '*-rw-rw-rw-*   1 admin
supergroup  12297 2014-05-30 02:57 /user/admin/test'

As the HDFS umask value is set to '000', I expected the uploaded file's
permission to be '-rwxr-xr-x' as well, but the result is different. Why?
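One likely explanation for the result above (my reading of HADOOP-3078/HDFS-4659; treat the exact defaults as an assumption): HDFS derives a new file's permission from a default of 666 for files (777 for directories) minus the umask, so a file is never granted the execute bit at creation time, and even a umask of 000 yields at most rw-rw-rw-. Sketched as bit arithmetic:

```python
# Assumed HDFS defaults: files start from 666, directories from 777;
# the configured umask is subtracted from that ceiling at create time.
FILE_DEFAULT = 0o666
DIR_DEFAULT = 0o777

def created_mode(default, umask):
    """Permission actually applied to a newly created inode."""
    return default & ~umask

print(oct(created_mode(FILE_DEFAULT, 0o000)))  # 0o666 -> rw-rw-rw-
print(oct(created_mode(DIR_DEFAULT, 0o022)))   # 0o755 -> rwxr-xr-x
```

Under that model, no umask setting can make '-put' preserve the execute bit; an explicit chmod after the upload is required.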




RE: File Permission Issue using Distributed Cache of Hadoop-2.2.0

2014-05-28 Thread Sebastian Gäde
Hi,

 

Not sure if this helps: in HDFS there is no execution permission, since you 
cannot execute files:

https://issues.apache.org/jira/browse/HADOOP-3078

https://issues.apache.org/jira/browse/HDFS-4659

 

Cheers

Seb.

 

From: sam liu [mailto:samliuhad...@gmail.com] 
Sent: Wednesday, May 28, 2014 7:40 AM
To: user@hadoop.apache.org
Subject: Re: File Permission Issue using Distributed Cache of Hadoop-2.2.0

 

Could this be a Hadoop issue? Or is some option in my cluster wrong?

 

2014-05-27 13:58 GMT+08:00 sam liu :

Hi Experts,

The original local file has the execution permission, but after it was 
distributed to multiple NodeManager nodes with the Distributed Cache feature 
of Hadoop-2.2.0, the distributed copy lost the execution permission.

However I did not encounter such issue in Hadoop-1.1.1.

Why did this happen? Were there changes to the 'dfs.umask' option or related settings?

Thanks!

 



Re: File Permission Issue using Distributed Cache of Hadoop-2.2.0

2014-05-27 Thread sam liu
Could this be a Hadoop issue? Or is some option in my cluster wrong?


2014-05-27 13:58 GMT+08:00 sam liu :

> Hi Experts,
>
> The original local file has the execution permission, but after it was
> distributed to multiple NodeManager nodes with the Distributed Cache feature
> of Hadoop-2.2.0, the distributed copy lost the execution permission.
>
> However I did not encounter such issue in Hadoop-1.1.1.
>
> Why did this happen? Were there changes to the 'dfs.umask' option or related settings?
>
> Thanks!
>
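Since localized files come out without the execute bit, one pragmatic workaround is to restore the bit from the task itself before invoking the file. A sketch (the helper name is hypothetical, not a Hadoop API):

```python
import os
import stat
import tempfile

def ensure_executable(path):
    """Add the execute bits to a localized file (hypothetical helper)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Demonstration on a throwaway local file:
fd, demo = tempfile.mkstemp()
os.close(fd)
os.chmod(demo, 0o644)  # what the distributed cache might leave behind

ensure_executable(demo)
print(oct(stat.S_IMODE(os.stat(demo).st_mode)))  # 0o755
```

Calling something like this in task setup, before the first exec of the localized file, sidesteps the lost bit without touching cluster configuration.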