[
https://issues.apache.org/jira/browse/HDFS-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070135#comment-14070135
]
Hadoop QA commented on HDFS-6703:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12657102/HDFS-6703.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-hdfs-project/hadoop-hdfs-nfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/7422//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7422//console
This message is automatically generated.
> NFS: Files can be deleted from a read-only mount
> ------------------------------------------------
>
> Key: HDFS-6703
> URL: https://issues.apache.org/jira/browse/HDFS-6703
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: nfs
> Affects Versions: 2.2.0
> Reporter: Abhiraj Butala
> Assignee: Srikanth Upputuri
> Attachments: HDFS-6703.patch
>
> As reported by bigdatagroup <[email protected]> on hadoop-users mailing
> list:
> {code}
> We exported our distributed filesystem with the following configuration
> (Managed by Cloudera Manager over CDH 5.0.1):
> <property>
>   <name>dfs.nfs.exports.allowed.hosts</name>
>   <value>192.168.0.153 ro</value>
> </property>
> As you can see, we expect the exported FS to be read-only, but in fact we
> are able to delete files and folders stored on it (where the user has the
> correct permissions) from the client machine that mounted the FS.
> Other write operations are correctly blocked.
> Hadoop version in use: 2.3.0+cdh5.0.1+567
> {code}
> I was able to reproduce the issue on the latest Hadoop trunk, though I could
> only delete files; deleting directories was correctly blocked:
> {code}
> abutala@abutala-vBox:/mnt/hdfs$ mount | grep 127
> 127.0.1.1:/ on /mnt/hdfs type nfs (rw,vers=3,proto=tcp,nolock,addr=127.0.1.1)
> abutala@abutala-vBox:/mnt/hdfs$ ls -lh
> total 512
> -rw-r--r-- 1 abutala supergroup 0 Jul 17 18:51 abc.txt
> drwxr-xr-x 2 abutala supergroup 64 Jul 17 18:31 temp
> abutala@abutala-vBox:/mnt/hdfs$ rm abc.txt
> abutala@abutala-vBox:/mnt/hdfs$ ls
> temp
> abutala@abutala-vBox:/mnt/hdfs$ rm -r temp
> rm: cannot remove `temp': Permission denied
> abutala@abutala-vBox:/mnt/hdfs$ ls
> temp
> abutala@abutala-vBox:/mnt/hdfs$
> {code}
> Contents of hdfs-site.xml:
> {code}
> <configuration>
>   <property>
>     <name>dfs.nfs3.dump.dir</name>
>     <value>/tmp/.hdfs-nfs3</value>
>   </property>
>   <property>
>     <name>dfs.nfs.exports.allowed.hosts</name>
>     <value>localhost ro</value>
>   </property>
> </configuration>
> {code}
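The attached HDFS-6703.patch is not shown in this comment, so the following is only a minimal sketch of the kind of guard the reproduction suggests is missing: the RMDIR path evidently rejects writes on a read-only export while REMOVE does not. The class, enum, and method names below (ReadOnlyExportSketch, AccessPrivilege, Nfs3Status, remove) are hypothetical and chosen for illustration; they do not mirror the actual org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3 API.
{code}
// Illustrative sketch only; names are hypothetical and do not reproduce
// the real RpcProgramNfs3 handler signatures.
public class ReadOnlyExportSketch {

  // Access privilege parsed from dfs.nfs.exports.allowed.hosts,
  // e.g. "192.168.0.153 ro" -> READ_ONLY, "192.168.0.153 rw" -> READ_WRITE.
  enum AccessPrivilege { READ_ONLY, READ_WRITE }

  // Stand-ins for NFSv3 status codes.
  enum Nfs3Status { NFS3_OK, NFS3ERR_ACCES }

  // The reported bug is that the REMOVE handler skipped a check of this
  // kind while RMDIR performed it, so files (but not directories) could be
  // deleted through a mount exported as read-only.
  static Nfs3Status remove(AccessPrivilege clientPrivilege, String path) {
    if (clientPrivilege != AccessPrivilege.READ_WRITE) {
      // Read-only export: reject the write operation before touching HDFS.
      return Nfs3Status.NFS3ERR_ACCES;
    }
    // ... perform the actual delete against HDFS here ...
    return Nfs3Status.NFS3_OK;
  }

  public static void main(String[] args) {
    // Client mounted an export configured as "192.168.0.153 ro".
    System.out.println(remove(AccessPrivilege.READ_ONLY, "/abc.txt"));  // NFS3ERR_ACCES
    System.out.println(remove(AccessPrivilege.READ_WRITE, "/abc.txt")); // NFS3_OK
  }
}
{code}
Placing the privilege check ahead of any namesystem call keeps read-only enforcement independent of HDFS file permissions, which would explain why the reporter could delete files they owned even though the export itself was marked ro.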
--
This message was sent by Atlassian JIRA
(v6.2#6252)