[jira] [Updated] (HDFS-6582) Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)

2014-08-06 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6582:
-

Attachment: HDFS-6582.patch

Attaching a patch which checks fis and returns NFS3ERR_ACCES if it is null 
(sketched below). Also updated the corresponding unit test.
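
A minimal sketch of the shape of such a check, extending the snippet quoted in 
the issue description (illustrative only; the attached patch is authoritative):

{code}
FSDataInputStream fis = clientCache.getDfsInputStream(userName,
    Nfs3Utils.getFileIdPath(handle));
if (fis == null) {
  // The cached stream could not be opened, e.g. because the user does not
  // have read permission on the file.
  return new READ3Response(Nfs3Status.NFS3ERR_ACCES);
}
try {
  readCount = fis.read(offset, readbuffer, 0, count);
{code}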

Though I am not sure if there are any other conditions (apart from the user not 
having permission) that can cause the FSDataInputStream to be null. How did you 
hit this issue, [~tedyu]?

Let me know what you guys think. Thanks!

 Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)
 ---

 Key: HDFS-6582
 URL: https://issues.apache.org/jira/browse/HDFS-6582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Ted Yu
Priority: Minor
 Attachments: HDFS-6582.patch


 Around line 691:
 {code}
 FSDataInputStream fis = clientCache.getDfsInputStream(userName,
 Nfs3Utils.getFileIdPath(handle));
 try {
   readCount = fis.read(offset, readbuffer, 0, count);
 {code}
 fis may be null, leading to NullPointerException





[jira] [Updated] (HDFS-6582) Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)

2014-08-06 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6582:
-

Assignee: Abhiraj Butala
  Status: Patch Available  (was: Open)

 Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)
 ---

 Key: HDFS-6582
 URL: https://issues.apache.org/jira/browse/HDFS-6582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Ted Yu
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6582.patch


 Around line 691:
 {code}
 FSDataInputStream fis = clientCache.getDfsInputStream(userName,
 Nfs3Utils.getFileIdPath(handle));
 try {
   readCount = fis.read(offset, readbuffer, 0, count);
 {code}
 fis may be null, leading to NullPointerException





[jira] [Commented] (HDFS-6582) Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)

2014-08-06 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087915#comment-14087915
 ] 

Abhiraj Butala commented on HDFS-6582:
--

The failures seem unrelated to the patch; these tests pass for me locally. 
There appears to be an issue with the cluster setup in the automation:

{code}
Error Message

asf901.ygridcore.net: asf901.ygridcore.net
Stacktrace

java.net.UnknownHostException: asf901.ygridcore.net: asf901.ygridcore.net
at java.net.InetAddress.getLocalHost(InetAddress.java:1402)
at 
org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1936)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1337)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:728)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:359)
at org.apache.hadoop.hdfs.nfs.TestMountd.testStart(TestMountd.java:42)

{code}

 Missing null check in RpcProgramNfs3#read(XDR, SecurityHandler)
 ---

 Key: HDFS-6582
 URL: https://issues.apache.org/jira/browse/HDFS-6582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Ted Yu
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6582.patch


 Around line 691:
 {code}
 FSDataInputStream fis = clientCache.getDfsInputStream(userName,
 Nfs3Utils.getFileIdPath(handle));
 try {
   readCount = fis.read(offset, readbuffer, 0, count);
 {code}
 fis may be null, leading to NullPointerException





[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Attachment: HDFS-6451.002.patch

Attaching a patch which addresses the review comments by [~brandonli]. Added 
tests for all the handlers in TestRpcProgramNfs3.java. Kept the tests generic, 
so they can be extended in the future to include other tests (various corner 
cases, other NFS3ERR* messages, etc.); an illustrative shape is sketched below.
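
For illustration, a generic handler test might look roughly like this (names 
such as nfsd, handle, and securityHandler are assumptions about the test 
fixture, not the literal patch):

{code}
@Test
public void testRead() throws Exception {
  // Build a READ request for a file the user cannot read and expect
  // NFS3ERR_ACCES rather than NFS3ERR_IO.
  XDR xdr = new XDR();
  handle.serialize(xdr);     // file handle of the target file
  xdr.writeLongAsHyper(0);   // offset
  xdr.writeInt(5);           // count
  READ3Response response = nfsd.read(xdr.asReadOnlyWrap(), securityHandler);
  assertEquals(Nfs3Status.NFS3ERR_ACCES, response.getStatus());
}
{code}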

While testing read() I hit HDFS-6582. I have made a note of this and commented 
out that specific test for now.

Let me know if there are any suggestions. Thanks!

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083946#comment-14083946
 ] 

Abhiraj Butala commented on HDFS-6451:
--

Forgot to mention, I also cleaned up some whitespace in RpcProgramNfs3.java. 
Please forgive me for that. :)

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Attachment: HDFS-6451.003.patch

Reattaching the patch with findbug warning addressed.

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.003.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-07-31 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081223#comment-14081223
 ] 

Abhiraj Butala commented on HDFS-6451:
--

Thanks for your feedback [~brandonli]. I am currently working on the unit tests 
and shall upload the patch soon.

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-07-28 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Attachment: HDFS-6451.patch

Attaching one version of the fix, which introduces a function to check the 
IOException type and return the NFS3Status code accordingly (sketched below).
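
A rough sketch of what such a helper could look like (hypothetical shape and 
name; the attached patch is authoritative):

{code}
// Assumes java.io.IOException, org.apache.hadoop.security.AccessControlException,
// and org.apache.hadoop.nfs.nfs3.Nfs3Status are imported.
static int mapErrorStatus(IOException e) {
  if (e instanceof AccessControlException) {
    // Surfaces to the NFS client as "Permission denied".
    return Nfs3Status.NFS3ERR_ACCES;
  }
  return Nfs3Status.NFS3ERR_IO; // fallback for all other I/O failures
}
{code}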

Please note that the function returns NFS3ERR_ACCES for AccessControlException 
(instead of NFS3ERR_PERM), as that gave 'Permission denied' responses on my 
setup, as shown below (this is also in sync with some of the discussion in 
HDFS-6411):

{code}
$ ls
log4j.properties  loglog
$ mv log4j.properties abc
mv: cannot move `log4j.properties' to `abc': Permission denied
$ ln -s log4j.properties abc
ln: failed to create symbolic link `abc': Permission denied
$ rm log4j.properties
rm: remove write-protected regular file `log4j.properties'? y
rm: cannot remove `log4j.properties': Permission denied
{code}

Please let me know if there are any suggestions. Thank you!

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-07-28 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Status: Patch Available  (was: Open)

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-22 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071236#comment-14071236
 ] 

Abhiraj Butala commented on HDFS-6455:
--

Thanks for reviewing the patch [~brandonli]. As part of the fix to HDFS-6456, I 
have already added a unit test which also checks for an invalid separator 
(given below) in allowed.hosts. Let me know if there is any other test case I 
should add.

{code}
+  @Test(expected = IllegalArgumentException.class)
+  public void testInvalidSeparator() {
+    NfsExports matcher = new NfsExports(CacheSize, ExpirationPeriod,
+        "foo ro : bar rw");
+  }
{code}

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 NFS log does not print any error message. It directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 ************************************************************/
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 NFS.out file has exception.
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}





[jira] [Assigned] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-22 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala reassigned HDFS-6455:


Assignee: Abhiraj Butala

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 NFS log does not print any error message. It directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 ************************************************************/
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 NFS.out file has exception.
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}





[jira] [Commented] (HDFS-6446) NFS: Different error messages for appending/writing data from read only mount

2014-07-20 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067847#comment-14067847
 ] 

Abhiraj Butala commented on HDFS-6446:
--

Hey [~yeshavora], I am not able to reproduce this issue on the latest hadoop 
trunk. Following are some outputs:

{code}
abutala@abutala-vBox:/mnt/hdfs$ mount | grep hdfs
127.0.1.1:/ on /mnt/hdfs type nfs (rw,vers=3,proto=tcp,nolock,addr=127.0.1.1)
abutala@abutala-vBox:/mnt/hdfs$ ls -lh
total 512
-rw-r--r-- 1 abutala supergroup 12M Jul 19 11:36 abc.txt
drwxr-xr-x 3 abutala supergroup  96 Jul 19 12:10 temp
abutala@abutala-vBox:/mnt/hdfs$ cp ~/work/hbase/hbase.tar.gz abc.txt
cp: cannot create regular file `abc.txt': Permission denied
abutala@abutala-vBox:/mnt/hdfs$ cat ~/work/hbase/hbase.tar.gz >> abc.txt
cat: write error: Permission denied
{code}

Am I missing anything? Or perhaps the issue got fixed recently? Let me know if 
you can still reproduce it. Thank you!

 NFS: Different error messages for appending/writing data from read only mount
 -

 Key: HDFS-6446
 URL: https://issues.apache.org/jira/browse/HDFS-6446
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora

 steps:
 1) set dfs.nfs.exports.allowed.hosts = "nfs_client ro"
 2) Restart nfs server
 3) Append data on file present on hdfs from read only mount point
 Append data
 {noformat}
 bash$ cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
 cat: write error: Input/output error
 {noformat}
 4) Write data from read only mount point
 Copy data
 {noformat}
 bash$ cp /tmp/tmp_10MB.txt /tmp/tmp_mnt/tmp/
 cp: cannot create regular file `/tmp/tmp_mnt/tmp/tmp_10MB.txt': Permission 
 denied
 {noformat}
 Both operations are treated differently. Copying data returns a valid error 
 message: 'Permission denied'. Appending data, however, does not return a valid 
 error message.





[jira] [Commented] (HDFS-6703) NFS: Files can be deleted from a read-only mount

2014-07-18 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066586#comment-14066586
 ] 

Abhiraj Butala commented on HDFS-6703:
--

Sure [~usrikanth]! Feel free to submit a patch when you are ready. The fix 
should be along similar lines to what you have proposed.

 NFS: Files can be deleted from a read-only mount
 

 Key: HDFS-6703
 URL: https://issues.apache.org/jira/browse/HDFS-6703
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Abhiraj Butala
Assignee: Srikanth Upputuri

   
 As reported by bigdatagroup bigdatagr...@itecons.it on hadoop-users mailing 
 list:
 {code}
 We exported our distributed filesystem with the following configuration 
 (Managed by Cloudera Manager over CDH 5.0.1):
   <property>
 <name>dfs.nfs.exports.allowed.hosts</name>
 <value>192.168.0.153 ro</value>
   </property>
 As you can see, we expect the exported FS to be read-only, but in fact we are 
 able to delete files and folders stored on it (where the user has the correct 
 permissions), from  the client machine that mounted the FS.
 Other writing operations are correctly blocked.
 Hadoop Version in use: 2.3.0+cdh5.0.1+567
 {code}
 I was able to reproduce the issue on the latest hadoop trunk, though I could 
 only delete files; deleting directories was correctly blocked:
 {code}
 abutala@abutala-vBox:/mnt/hdfs$ mount | grep 127
 127.0.1.1:/ on /mnt/hdfs type nfs (rw,vers=3,proto=tcp,nolock,addr=127.0.1.1)
 abutala@abutala-vBox:/mnt/hdfs$ ls -lh
 total 512
 -rw-r--r-- 1 abutala supergroup  0 Jul 17 18:51 abc.txt
 drwxr-xr-x 2 abutala supergroup 64 Jul 17 18:31 temp
 abutala@abutala-vBox:/mnt/hdfs$ rm abc.txt
 abutala@abutala-vBox:/mnt/hdfs$ ls
 temp
 abutala@abutala-vBox:/mnt/hdfs$ rm -r temp
 rm: cannot remove `temp': Permission denied
 abutala@abutala-vBox:/mnt/hdfs$ ls
 temp
 abutala@abutala-vBox:/mnt/hdfs$
 {code}
 Contents of hdfs-site.xml:
 {code}
 <configuration>
 <property>
 <name>dfs.nfs3.dump.dir</name>
 <value>/tmp/.hdfs-nfs3</value>
 </property>
 <property>
 <name>dfs.nfs.exports.allowed.hosts</name>
 <value>localhost ro</value>
 </property>
 </configuration>
 {code}





[jira] [Created] (HDFS-6703) NFS: Files can be deleted from a read-only mount

2014-07-17 Thread Abhiraj Butala (JIRA)
Abhiraj Butala created HDFS-6703:


 Summary: NFS: Files can be deleted from a read-only mount
 Key: HDFS-6703
 URL: https://issues.apache.org/jira/browse/HDFS-6703
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Abhiraj Butala


  



As reported by bigdatagroup bigdatagr...@itecons.it on hadoop-users mailing 
list:
{code}
We exported our distributed filesystem with the following configuration 
(Managed by Cloudera Manager over CDH 5.0.1):

 <property>
<name>dfs.nfs.exports.allowed.hosts</name>
<value>192.168.0.153 ro</value>
  </property>

As you can see, we expect the exported FS to be read-only, but in fact we are 
able to delete files and folders stored on it (where the user has the correct 
permissions), from  the client machine that mounted the FS.
Other writing operations are correctly blocked.

Hadoop Version in use: 2.3.0+cdh5.0.1+567
{code}

I was able to reproduce the issue on the latest hadoop trunk, though I could 
only delete files; deleting directories was correctly blocked:

{code}
abutala@abutala-vBox:/mnt/hdfs$ mount | grep 127
127.0.1.1:/ on /mnt/hdfs type nfs (rw,vers=3,proto=tcp,nolock,addr=127.0.1.1)

abutala@abutala-vBox:/mnt/hdfs$ ls -lh
total 512
-rw-r--r-- 1 abutala supergroup  0 Jul 17 18:51 abc.txt
drwxr-xr-x 2 abutala supergroup 64 Jul 17 18:31 temp

abutala@abutala-vBox:/mnt/hdfs$ rm abc.txt

abutala@abutala-vBox:/mnt/hdfs$ ls
temp

abutala@abutala-vBox:/mnt/hdfs$ rm -r temp
rm: cannot remove `temp': Permission denied

abutala@abutala-vBox:/mnt/hdfs$ ls
temp

abutala@abutala-vBox:/mnt/hdfs$
{code}

Contents of hdfs-site.xml:

{code}
<configuration>
<property>
<name>dfs.nfs3.dump.dir</name>
<value>/tmp/.hdfs-nfs3</value>
</property>
<property>
<name>dfs.nfs.exports.allowed.hosts</name>
<value>localhost ro</value>
</property>

</configuration>
{code}





[jira] [Updated] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-16 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6455:
-

Attachment: HDFS-6455.002.patch

Thanks [~brandonli], I rebased the patch and addressed the showmount timeout 
issue. This is how the showmount output looks now:

{code}
showmount -e 127.0.1.1
rpc mount export: RPC: Procedure unavailable
{code}

So to summarize, the patch does the following:
1. Catch IllegalArgumentException, log it, and return a null NFS export instead 
of exiting the program (see the sketch after this list).
2. Handle null NFS exports at appropriate places to avoid NullPointerException.
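
A minimal sketch of point 1, assuming the change lands in 
NfsExports.getInstance() (hypothetical shape; constant names are from memory 
and may differ, and cacheSize/expirationPeriodNano are read from the 
configuration as in the existing code):

{code}
public static synchronized NfsExports getInstance(Configuration conf) {
  if (exports == null) {
    String matchHosts = conf.get(Nfs3Constant.EXPORTS_ALLOWED_HOSTS_KEY,
        Nfs3Constant.EXPORTS_ALLOWED_HOSTS_KEY_DEFAULT);
    try {
      exports = new NfsExports(cacheSize, expirationPeriodNano, matchHosts);
    } catch (IllegalArgumentException e) {
      LOG.error("Invalid NFS Exports provided: ", e);
      return exports; // still null; callers must handle a null export table
    }
  }
  return exports;
}
{code}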

Let me know if anything else is needed. Thanks again!

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 NFS log does not print any error message. It directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 ************************************************************/
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 NFS.out file has exception.
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}





[jira] [Updated] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging

2014-07-14 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6378:
-

Attachment: HDFS-6378.003.patch

Thanks [~brandonli]. Attaching a new patch which addresses your comments. 
Replaced the System.exit() calls with terminate() to address the findbugs 
warnings; a sketch of the change is below. Let me know if anything else is 
needed!
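
For reference, the change is roughly of this shape (an illustrative sketch, not 
the literal diff):

{code}
// System.exit() bypasses Hadoop's exit handling and trips the findbugs
// DM_EXIT warning; ExitUtil.terminate() is the sanctioned replacement.
} catch (Throwable e) {
  LOG.fatal("Failed to start the server. Cause:", e);
  terminate(1, e); // static import of org.apache.hadoop.util.ExitUtil.terminate
}
{code}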

 NFS: when portmap/rpcbind is not available, NFS registration should timeout 
 instead of hanging 
 ---

 Key: HDFS-6378
 URL: https://issues.apache.org/jira/browse/HDFS-6378
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
Assignee: Abhiraj Butala
 Attachments: HDFS-6378.002.patch, HDFS-6378.003.patch, HDFS-6378.patch


 When portmap/rpcbind is not available, NFS could be stuck at registration. 
 Instead, the NFS gateway should shut down automatically with a proper error message.





[jira] [Created] (HDFS-6675) NFS: Fix javadoc warning in RpcProgram.java

2014-07-14 Thread Abhiraj Butala (JIRA)
Abhiraj Butala created HDFS-6675:


 Summary: NFS: Fix javadoc warning in RpcProgram.java
 Key: HDFS-6675
 URL: https://issues.apache.org/jira/browse/HDFS-6675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, nfs
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial


Fix the following javadoc warning during hadoop-nfs compilation:

{code}
:
:
[WARNING] Javadoc Warnings
[WARNING] 
/home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
 warning - @param argument DatagramSocket is not a parameter name.
{code}
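
The fix is a one-line javadoc correction of roughly this form (illustrative 
diff; the @param tag names the parameter's type, DatagramSocket, instead of the 
parameter itself, assumed here to be registrationSocket):

{code}
-   * @param DatagramSocket registrationSocket ...
+   * @param registrationSocket ...
{code}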






[jira] [Updated] (HDFS-6675) NFS: Fix javadoc warning in RpcProgram.java

2014-07-14 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6675:
-

Status: Patch Available  (was: Open)

 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HDFS-6675
 URL: https://issues.apache.org/jira/browse/HDFS-6675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, nfs
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Attachments: HDFS-6675.patch


 Fix the following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Updated] (HDFS-6675) NFS: Fix javadoc warning in RpcProgram.java

2014-07-14 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6675:
-

Attachment: HDFS-6675.patch

Attaching a simple patch to fix this. No updates to tests, as it is a 
documentation-only change.

 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HDFS-6675
 URL: https://issues.apache.org/jira/browse/HDFS-6675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, nfs
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Attachments: HDFS-6675.patch


 Fix the following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Commented] (HDFS-6675) NFS: Fix javadoc warning in RpcProgram.java

2014-07-14 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14061484#comment-14061484
 ] 

Abhiraj Butala commented on HDFS-6675:
--

Oh I see, it could be. My JAVA_HOME is set to java-1.7.0-openjdk-amd64. I 
looked it up online and found a few similar JIRAs: HDFS-344, HBASE-7895, 
GIRAPH-130.

Thanks for taking a look [~ajisakaa]  [~wheat9]!


 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HDFS-6675
 URL: https://issues.apache.org/jira/browse/HDFS-6675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, nfs
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Attachments: HDFS-6675.patch


 Fix the following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Updated] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-13 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6456:
-

Attachment: HDFS-6456.patch

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6456.patch


 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission:
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print any 
 error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.





[jira] [Updated] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-13 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6456:
-

Status: Patch Available  (was: Open)

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6456.patch


 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission:
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print any 
 error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.





[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-09 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055897#comment-14055897
 ] 

Abhiraj Butala commented on HDFS-6455:
--

Thanks for reviewing the patch [~brandonli]. This is the output of the 
showmount command:

{code}
abutala@abutala-vBox:~$ showmount -e 127.0.1.1
rpc mount export: RPC: Timed out
{code}

I don't see any errors or log messages in the NFS server output. What should 
be the correct behavior of showmount in this case?

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
 Attachments: HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 NFS log does not print any error message. It directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 ************************************************************/
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 NFS.out file has exception.
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}





[jira] [Commented] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-07 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054019#comment-14054019
 ] 

Abhiraj Butala commented on HDFS-6456:
--

Ok, got it. Thanks for the feedback [~brandonli] and [~yeshavora]; I shall work 
on the fix shortly.

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora

 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission:
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print any 
 error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.





[jira] [Updated] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-06 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6455:
-

Attachment: HDFS-6455.patch

Thanks for the feedback Brandon. Attaching a patch which catches the exception 
and logs it as an ERROR. The NFS server does not exit now; the 'showmount' 
command times out when invalid exports are provided.

Output of 'hdfs nfs3':
{code}
14/07/06 02:13:37 INFO nfs3.Nfs3Base: registered UNIX signal handlers for 
[TERM, HUP, INT]
14/07/06 02:13:38 INFO oncrpc.RpcProgram: Will accept client connections from 
unprivileged ports
14/07/06 02:13:38 INFO nfs3.IdUserGroup: Not doing static UID/GID mapping 
because '/etc/nfs.map' does not exist.
14/07/06 02:13:38 INFO nfs3.IdUserGroup: Updated user map size: 36
14/07/06 02:13:38 INFO nfs3.IdUserGroup: Updated group map size: 65
14/07/06 02:13:38 ERROR nfs.NfsExports: Invalid NFS Exports provided:
java.lang.IllegalArgumentException: Incorrectly formatted line 'abc ro :foobar 
rw'
at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:363)
at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:158)
at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:57)
at 
org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:177)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
14/07/06 02:13:38 INFO nfs3.WriteManager: Stream timeout is 60ms.
14/07/06 02:13:38 INFO nfs3.WriteManager: Maximum open streams is 256
14/07/06 02:13:38 INFO nfs3.OpenFileCtxCache: Maximum open streams is 256
14/07/06 02:13:39 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
14/07/06 02:13:39 INFO nfs3.RpcProgramNfs3: Delete current dump directory 
/tmp/.hdfs-nfs3
14/07/06 02:13:39 INFO nfs3.RpcProgramNfs3: Create new dump directory 
/tmp/.hdfs-nfs3
14/07/06 02:13:39 INFO nfs3.Nfs3Base: NFS server port set to: 2049
14/07/06 02:13:39 INFO oncrpc.RpcProgram: Will accept client connections from 
unprivileged ports
14/07/06 02:13:39 ERROR nfs.NfsExports: Invalid NFS Exports provided:
java.lang.IllegalArgumentException: Incorrectly formatted line 'abc ro :foobar 
rw'
at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:363)
at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:158)
at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:57)
at 
org.apache.hadoop.hdfs.nfs.mount.RpcProgramMountd.<init>(RpcProgramMountd.java:88)
at org.apache.hadoop.hdfs.nfs.mount.Mountd.<init>(Mountd.java:37)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:46)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:66)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
14/07/06 02:13:40 INFO oncrpc.SimpleUdpServer: Started listening to UDP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/06 02:13:40 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/06 02:13:40 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 2049 for Rpc program: NFS3 at localhost:2049 with workerCount 0
{code}

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
 Attachments: HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at 

[jira] [Updated] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-07-06 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6455:
-

Status: Patch Available  (was: Open)

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
 Attachments: HDFS-6455.patch


 The error for invalid separator in dfs.nfs.exports.allowed.hosts property 
 should be added in nfs log file instead nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 "sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 NFS log does not print any error message. It directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 ************************************************************/
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 NFS.out file has exception.
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}





[jira] [Commented] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-06 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14053087#comment-14053087
 ] 

Abhiraj Butala commented on HDFS-6456:
--

I am not sure if this is a valid issue. 'host1-rw' itself is treated as the 
hostname. Since the hostname and the permissions are not separated by 
whitespace, it is treated as if permissions were not provided for this 
hostname; by default, read-only permission is assumed in this case.

Following are relevant comments from NfsExports.java:

{code}
/**
   * Loading a matcher from a string. The default access privilege is read-only.
   * The string contains 1 or 2 parts, separated by whitespace characters, where
   * the first part specifies the client hosts, and the second part (if
   * existent) specifies the access privilege of the client hosts. I.e.,
   *
   * client-hosts [access-privilege]
   */
{code} 

The fix for HDFS-6455 shall log an error for invalid exports, like invalid 
separators. 

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora

 Pass invalid entry in dfs.nfs.exports.allowed.hosts. Use - as separator 
 between hostname and access permission 
 {noformat}
 propertynamedfs.nfs.exports.allowed.hosts/namevaluehost1-rw/value/property
 {noformat}
 This misconfiguration is not detected by NFS server. It does not print any 
 error message. The host passed in this configuration is also not able to 
 mount nfs. In conclusion, no node can mount the nfs with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.





[jira] [Updated] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging

2014-07-02 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6378:
-

Attachment: HDFS-6378.002.patch

Thanks for the feedback Brandon. Attaching a patch which addresses your 
comments. Also, I moved the shutdown hook registration to after the portmap 
registration itself, because installing the hook would be redundant if the 
registration exited with an exception (see the sketch below). Finally, I have 
removed some extra whitespace.
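
The reordering in MountdBase.start() is roughly as follows (a sketch; the exact 
code may differ):

{code}
// Register with the portmapper first; only install the unregister shutdown
// hook once registration has succeeded, since unregistering mappings that
// were never registered would be redundant.
rpcProgram.register(PortmapMapping.TRANSPORT_UDP, udpBoundPort);
rpcProgram.register(PortmapMapping.TRANSPORT_TCP, tcpBoundPort);
ShutdownHookManager.get().addShutdownHook(new Unregister(),
    SHUTDOWN_HOOK_PRIORITY);
{code}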

Here are some outputs:

1. 'hdfs nfs3' without starting 'hdfs portmap':

{code}
14/07/02 02:02:01 INFO nfs3.Nfs3Base: NFS server port set to: 2049
14/07/02 02:02:01 INFO oncrpc.RpcProgram: Will accept client connections from 
unprivileged ports
14/07/02 02:02:02 INFO oncrpc.SimpleUdpServer: Started listening to UDP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/02 02:02:02 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/02 02:02:03 ERROR oncrpc.RpcProgram: Registration failure with 
localhost:4242, portmap entry: (PortmapMapping-15:1:17:4242)
14/07/02 02:02:03 FATAL mount.MountdBase: Failed to start the server. Cause:
java.lang.RuntimeException: Registration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:79)
at 
org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:55)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:68)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 5 more
14/07/02 02:02:03 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down Nfs3 at abutala-vBox/127.0.1.1
/
{code}

2. Started both portmap and nfs3, then stopped portmap first and pressed 
Ctrl-C on nfs3:

{code}
14/07/02 02:02:41 INFO nfs3.Nfs3Base: NFS server port set to: 2049
14/07/02 02:02:41 INFO oncrpc.RpcProgram: Will accept client connections from 
unprivileged ports
14/07/02 02:02:42 INFO oncrpc.SimpleUdpServer: Started listening to UDP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/02 02:02:42 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/07/02 02:02:42 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 2049 for Rpc program: NFS3 at localhost:2049 with workerCount 0
^C14/07/02 02:02:52 ERROR nfs3.Nfs3Base: RECEIVED SIGNAL 2: SIGINT
14/07/02 02:02:53 ERROR oncrpc.RpcProgram: Unregistration failure with 
localhost:4242, portmap entry: (PortmapMapping-15:1:17:4242)
14/07/02 02:02:53 WARN util.ShutdownHookManager: ShutdownHook 'Unregister' 
failed, java.lang.RuntimeException: Unregistration failure
java.lang.RuntimeException: Unregistration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at org.apache.hadoop.mount.MountdBase$Unregister.run(MountdBase.java:98)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 3 more
14/07/02 02:02:53 ERROR oncrpc.RpcProgram: Unregistration failure with 
localhost:2049, portmap entry: (PortmapMapping-13:3:6:2049)
14/07/02 02:02:53 WARN util.ShutdownHookManager: ShutdownHook 'Unregister' 
failed, java.lang.RuntimeException: Unregistration failure
java.lang.RuntimeException: Unregistration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at 

[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in allowed.hosts

2014-06-30 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14047486#comment-14047486
 ] 

Abhiraj Butala commented on HDFS-6455:
--

Hey Brandon, I compared this with the NameNode's behavior for illegal 
arguments, and it too throws IllegalArgumentException. The NameNode's main() 
catches this exception, logs a FATAL message (ensuring it reaches the logs), 
and then calls terminate(). Should we similarly add a log message here before 
exiting?

Also, as per your comments in HDFS-6456, shutting down may not be desirable 
for incorrect exports, since we want to support refreshing them without 
restarting the NFS gateway. So should we instead catch the exception, log it, 
and not shut down here, along the lines of the sketch below? Please advise. 
Thank you!
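
Hypothetical sketch only, not a patch (variable names such as exports and 
config are assumed):

{code}
// Hypothetical sketch: where the exports matcher is (re)loaded, catch the
// malformed-entry error, log it, and keep the previous export table instead
// of letting the exception take down the gateway.
try {
  exports = NfsExports.getInstance(config);
} catch (IllegalArgumentException e) {
  LOG.error("Invalid entry in dfs.nfs.exports.allowed.hosts; "
      + "keeping previous export table", e);
}
{code}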

 NFS: Exception should be added in NFS log for invalid separator in 
 allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora

 The error for an invalid separator in the dfs.nfs.exports.allowed.hosts 
 property should be logged in the NFS log file instead of the nfs.out file.
 Steps to reproduce:
 1. Pass invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1  ro:host2 rw</value></property>
 {noformat}
 2. Restart the NFS server. It fails to start and prints the exception to the console:
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.init(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 The NFS log does not print any error message; the server directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 /
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 The nfs.out file contains the exception:
 {noformat}
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.init(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.init(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.init(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65536
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging

2014-06-29 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6378:
-

Attachment: HDFS-6378.patch

Attaching a simple patch that adds a timeout to the DatagramSocket, which 
otherwise blocks indefinitely in receive(). I have kept the timeout at 500ms; 
let me know if it should be changed to something more appropriate. The core 
of the change is sketched below.

Ctrl-C is now able to kill the NFS gateway if portmap is not running or has 
exited. Note that an exception is logged when portmap is not running, but the 
NFS gateway does not exit until Ctrl-C is pressed.
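
A minimal, self-contained sketch of the idea (not the patch itself; the host 
and port here are placeholders):

{code}
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

// Bound the blocking receive() with SO_TIMEOUT so a missing portmap fails
// fast instead of hanging forever.
public class UdpTimeoutSketch {
  public static void main(String[] args) throws Exception {
    byte[] buf = new byte[512];
    DatagramSocket socket = new DatagramSocket();
    try {
      socket.setSoTimeout(500);  // 500ms, the value used in the patch
      socket.send(new DatagramPacket(buf, buf.length,
          InetAddress.getByName("localhost"), 111));  // rpcbind/portmap port
      socket.receive(new DatagramPacket(buf, buf.length));
    } catch (SocketTimeoutException e) {
      System.err.println("Receive timed out: " + e.getMessage());
    } finally {
      socket.close();
    }
  }
}
{code}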

Output logs:
{code}
14/06/29 03:11:46 INFO oncrpc.SimpleUdpServer: Started listening to UDP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/06/29 03:11:46 INFO oncrpc.SimpleTcpServer: Started listening to TCP 
requests at port 4242 for Rpc program: mountd at localhost:4242 with 
workerCount 1
14/06/29 03:11:46 ERROR oncrpc.RpcProgram: Registration failure with 
localhost:4242, portmap entry: (PortmapMapping-15:1:17:4242)
java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:77)
at 
org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:55)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:68)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
Exception in thread "main" java.lang.RuntimeException: Registration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:77)
at 
org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:55)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:68)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:72)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 5 more
^C14/06/29 03:18:51 ERROR nfs3.Nfs3Base: RECEIVED SIGNAL 2: SIGINT
14/06/29 03:18:52 ERROR oncrpc.RpcProgram: Unregistration failure with 
localhost:4242, portmap entry: (PortmapMapping-15:1:17:4242)
java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at org.apache.hadoop.mount.MountdBase$Unregister.run(MountdBase.java:90)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
14/06/29 03:18:52 WARN util.ShutdownHookManager: ShutdownHook 'Unregister' 
failed, java.lang.RuntimeException: Unregistration failure
java.lang.RuntimeException: Unregistration failure
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:135)
at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
at org.apache.hadoop.mount.MountdBase$Unregister.run(MountdBase.java:90)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at 
java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:145)
at java.net.DatagramSocket.receive(DatagramSocket.java:786)
at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:66)
at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
... 3 more
14/06/29 03:18:52 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down Nfs3 at 

[jira] [Updated] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging

2014-06-29 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6378:
-

Status: Patch Available  (was: Open)

 NFS: when portmap/rpcbind is not available, NFS registration should timeout 
 instead of hanging 
 ---

 Key: HDFS-6378
 URL: https://issues.apache.org/jira/browse/HDFS-6378
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6378.patch


 When portmap/rpcbind is not available, NFS can get stuck at registration. 
 Instead, the NFS gateway should shut down automatically with a proper error 
 message.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6416) Use Time#monotonicNow in OpenFileCtx and OpenFileCtxCatch to avoid system clock bugs

2014-05-22 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6416:
-

Attachment: HDFS-6416.patch

Attaching a patch that uses Time.monotonicNow() in OpenFileCtx.java and 
OpenFileCtxCache.java, as suggested. An illustrative sketch of the pattern is 
below.
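
Illustrative only (not the patch itself):

{code}
import org.apache.hadoop.util.Time;

// Elapsed-time checks based on Time.monotonicNow() are immune to wall-clock
// adjustments, unlike Time.now().
public class MonotonicElapsedSketch {
  private long lastActiveTime = Time.monotonicNow();

  public void touch() {
    lastActiveTime = Time.monotonicNow();
  }

  public boolean idleLongerThan(long timeoutMs) {
    // Safe even if an operator sets the system clock backwards.
    return Time.monotonicNow() - lastActiveTime > timeoutMs;
  }
}
{code}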

 Use Time#monotonicNow in OpenFileCtx and OpenFileCtxCatch to avoid system 
 clock bugs
 

 Key: HDFS-6416
 URL: https://issues.apache.org/jira/browse/HDFS-6416
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.4.0
Reporter: Brandon Li
Priority: Minor
 Attachments: HDFS-6416.patch


 As [~cnauroth] pointed out in HADOOP-10612, Time#monotonicNow is the 
 preferred method to use, since it isn't subject to system clock bugs (e.g. 
 someone resets the clock to a time in the past, and then updates don't happen 
 for a long time).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6416) Use Time#monotonicNow in OpenFileCtx and OpenFileCtxCatch to avoid system clock bugs

2014-05-22 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6416:
-

Status: Patch Available  (was: Open)

 Use Time#monotonicNow in OpenFileCtx and OpenFileCtxCatch to avoid system 
 clock bugs
 

 Key: HDFS-6416
 URL: https://issues.apache.org/jira/browse/HDFS-6416
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.4.0
Reporter: Brandon Li
Priority: Minor
 Attachments: HDFS-6416.patch


 As [~cnauroth] pointed out in HADOOP-10612, Time#monotonicNow is the 
 preferred method to use, since it isn't subject to system clock bugs (e.g. 
 someone resets the clock to a time in the past, and then updates don't happen 
 for a long time).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6408) Redundant definitions in log4j.properties

2014-05-17 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6408:
-

Attachment: HDFS-6408.patch

Attaching a simple patch to remove the redundant definitions.

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6408) Redundant definitions in log4j.properties

2014-05-17 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6408:
-

Assignee: Abhiraj Butala
  Status: Patch Available  (was: Open)

 Redundant definitions in log4j.properties
 -

 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Minor
 Attachments: HDFS-6408.patch


 Following definitions in 
 'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
 defined twice and should be removed:
 {code}
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
 [%t:%C{1}@%L] - %m%n
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-05-16 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13998917#comment-13998917
 ] 

Abhiraj Butala commented on HDFS-6263:
--

Thanks [~ajisakaa], I have opened HDFS-6408 for this and will provide a patch 
there soon.

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HDFS-6263
 URL: https://issues.apache.org/jira/browse/HDFS-6263
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6263.patch


 HDFS-side of HADOOP-10525.
 {code}
 # uncomment the next line to limit number of backup files
 # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 {code}
 In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
 above lines should be removed because the appender (DRFA) doesn't support 
 MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5683) Better audit log messages for caching operations

2014-05-16 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000251#comment-14000251
 ] 

Abhiraj Butala commented on HDFS-5683:
--

No problem, Andrew. Thank you for reviewing!

 Better audit log messages for caching operations
 

 Key: HDFS-5683
 URL: https://issues.apache.org/jira/browse/HDFS-5683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
Assignee: Abhiraj Butala
  Labels: caching
 Fix For: 2.5.0

 Attachments: HDFS-5683.001.patch


 Right now the caching audit logs aren't that useful, e.g.
 {noformat}
 2013-12-18 14:14:54,423 INFO  FSNamesystem.audit 
 (FSNamesystem.java:logAuditMessage(7362)) - allowed=true ugi=andrew 
 (auth:SIMPLE)   ip=/127.0.0.1   cmd=addCacheDirective   src=null
 dst=nullperm=null
 {noformat}
 It'd be good to include some more information when possible, like the path, 
 pool, id, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6408) Redundant definitions in log4j.properties

2014-05-16 Thread Abhiraj Butala (JIRA)
Abhiraj Butala created HDFS-6408:


 Summary: Redundant definitions in log4j.properties
 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Priority: Minor


The following definitions in 
'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
defined twice; the duplicates should be removed:

{code}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-05-15 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6263:
-

Status: Patch Available  (was: Open)

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HDFS-6263
 URL: https://issues.apache.org/jira/browse/HDFS-6263
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6263.patch


 HDFS-side of HADOOP-10525.
 {code}
 # uncomment the next line to limit number of backup files
 # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 {code}
 In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
 above lines should be removed because the appender (DRFA) doesn't support 
 MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-05-15 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6263:
-

Attachment: HDFS-6263.patch

Attaching a simple patch to remove the config.

Also, I noticed that in the same log4j.properties, the following two configs 
appear twice at the end of the file:

{code}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n
{code}

The ConversionPattern is subtly different in the second definition. (The 
first definition has an extra '-' before '5p', while the second does not.)

Is this a mistake? Should I just remove the redundant definitions? Please 
advise. Thanks!

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HDFS-6263
 URL: https://issues.apache.org/jira/browse/HDFS-6263
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6263.patch


 HDFS-side of HADOOP-10525.
 {code}
 # uncomment the next line to limit number of backup files
 # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 {code}
 In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
 above lines should be removed because the appender (DRFA) doesn't support 
 MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5683) Better audit log messages for caching operations

2014-03-29 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-5683:
-

Attachment: HDFS-5683.001.patch

I addressed the above issue and made the appropriate changes to 
FSNamesystem.java. Below are some examples of the caching audit logs with the 
patch:

{noformat}
14/03/29 02:00:59 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=addCacheDirective   src={id: 8, path: 
/user/abutala/abhiraj, replication: 1, pool: pool7, expiration: 
73071270-05-24T21:49:13-0700} dst=nullperm=null
14/03/29 02:01:49 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=modifyCacheDirective   src={id: 8} 
dst={id: 8, path: /user/abutala/abhiraj/tmp2}   perm=null
14/03/29 02:03:35 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=listCacheDirectives src={}  dst=null
perm=null
14/03/29 02:03:47 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=listCacheDirectives src={pool: pool2}   
dst=nullperm=null
14/03/29 02:04:02 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=listCacheDirectives src={path: 
/user/abutala/abhiraj, pool: pool2}  dst=nullperm=null
14/03/29 02:05:54 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=removeCacheDirectivesrc={id: 8} 
dst=nullperm=null
14/03/29 02:08:44 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=addCachePool
src={poolName:pool10, ownerName:abutala, groupName:abutala, mode:0755, 
limit:9223372036854775807, maxRelativeExpiryMs:2305843009213693951}  
dst=nullperm=null
14/03/29 02:09:58 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=modifyCachePool src={poolName: 
pool10}  dst={poolName:pool10, ownerName:null, groupName:null, mode:0666, 
limit:null, maxRelativeExpiryMs:null}  perm=null
14/03/29 02:11:21 INFO FSNamesystem.audit: allowed=true ugi=abutala 
(auth:SIMPLE)   ip=/127.0.0.1   cmd=removeCachePool src={poolName: 
pool10}  dst=nullperm=null
{noformat}


For modifyCacheDirective and modifyCachePool, I put the final changes in the 
'dst' section, while the 'src' section carries only the id or pool name being 
modified, respectively; schematically it looks like the snippet below. Also, 
I am not including any tests, as this is just an update to the logs.
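
Schematic only; the real changes are in FSNamesystem.java, and the exact 
logAuditEvent signature and formatting may differ:

{code}
// Schematic sketch of the src/dst convention described above (names such as
// directiveId and newPath are placeholders, not the actual patch).
String src = "{id: " + directiveId + "}";                         // target
String dst = "{id: " + directiveId + ", path: " + newPath + "}";  // final state
logAuditEvent(true, "modifyCacheDirective", src, dst, null);
{code}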

Kindly review and let me know if there are any issues. Thank you!

 Better audit log messages for caching operations
 

 Key: HDFS-5683
 URL: https://issues.apache.org/jira/browse/HDFS-5683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
  Labels: caching
 Attachments: HDFS-5683.001.patch


 Right now the caching audit logs aren't that useful, e.g.
 {noformat}
 2013-12-18 14:14:54,423 INFO  FSNamesystem.audit 
 (FSNamesystem.java:logAuditMessage(7362)) - allowed=true ugi=andrew 
 (auth:SIMPLE)   ip=/127.0.0.1   cmd=addCacheDirective   src=null
 dst=nullperm=null
 {noformat}
 It'd be good to include some more information when possible, like the path, 
 pool, id, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5683) Better audit log messages for caching operations

2014-03-29 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-5683:
-

Status: Patch Available  (was: Open)

 Better audit log messages for caching operations
 

 Key: HDFS-5683
 URL: https://issues.apache.org/jira/browse/HDFS-5683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.3.0, 3.0.0
Reporter: Andrew Wang
  Labels: caching
 Attachments: HDFS-5683.001.patch


 Right now the caching audit logs aren't that useful, e.g.
 {noformat}
 2013-12-18 14:14:54,423 INFO  FSNamesystem.audit 
 (FSNamesystem.java:logAuditMessage(7362)) - allowed=true ugi=andrew 
 (auth:SIMPLE)   ip=/127.0.0.1   cmd=addCacheDirective   src=null
 dst=nullperm=null
 {noformat}
 It'd be good to include some more information when possible, like the path, 
 pool, id, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5683) Better audit log messages for caching operations

2014-03-28 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13950502#comment-13950502
 ] 

Abhiraj Butala commented on HDFS-5683:
--

Hey Andrew,

I am hoping to provide a fix for this issue, and I would really appreciate it 
if you could help me with a few beginner questions:

I tried the following steps to observe the caching audit log messages in 
hdfs-audit.log, but I don't see the logs being generated:
a) Compiled and installed the latest hadoop-trunk.
b) Updated the core-site.xml as per the documentation.
c) Updated the $HADOOP_CONF_DIR/log4j.properties to direct hdfs audit logs 
to RFAAUDIT
d) Started namenode and datanode. I could see the hdfs-audit.log file being 
generated in the $HADOOP_LOG_DIR/ as expected.
e) Added a directory and a file in hdfs using 'hdfs dfs' commands.
f) Created a cache pool: 'hdfs cacheadmin -addPool pool1'
g) Added a cache directive: 'hdfs cacheadmin -addDirective -path [path 
added above] -pool pool1'

I was hoping steps e), f) and g) would log the audit messages in 
hdfs-audit.log, but I did not see any logs there. Am I missing anything? Or 
could it be that my audit logging is not set up correctly?
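
For reference, these are roughly the log4j.properties pieces I changed in 
step c), adapted from the stock Hadoop template (exact keys may differ across 
versions):

{code}
# Route the HDFS audit logger to the RFAAUDIT appender.
hdfs.audit.logger=INFO,RFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}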

Thank you for your help!

 Better audit log messages for caching operations
 

 Key: HDFS-5683
 URL: https://issues.apache.org/jira/browse/HDFS-5683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
  Labels: caching

 Right now the caching audit logs aren't that useful, e.g.
 {noformat}
 2013-12-18 14:14:54,423 INFO  FSNamesystem.audit 
 (FSNamesystem.java:logAuditMessage(7362)) - allowed=true ugi=andrew 
 (auth:SIMPLE)   ip=/127.0.0.1   cmd=addCacheDirective   src=null
 dst=nullperm=null
 {noformat}
 It'd be good to include some more information when possible, like the path, 
 pool, id, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5591) Checkpointing should use monotonic time when calculating period

2014-03-25 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13946264#comment-13946264
 ] 

Abhiraj Butala commented on HDFS-5591:
--

Hey Charles, your patch looks good; the same change should also be made in 
SecondaryNameNode.java. Thanks!

 
{code}
--- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java (revision 1580757)
+++ hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java (working copy)
@@ -376,7 +376,7 @@
     if(UserGroupInformation.isSecurityEnabled())
       UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
 
-    long now = Time.now();
+    long now = Time.monotonicNow();
 
     if (shouldCheckpointBasedOnCount() ||
         now >= lastCheckpointTime + 1000 * checkpointConf.getPeriod()) {
{code}

 Checkpointing should use monotonic time when calculating period
 ---

 Key: HDFS-5591
 URL: https://issues.apache.org/jira/browse/HDFS-5591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Andrew Wang
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5591.001.patch


 Both StandbyCheckpointer and SecondaryNameNode use {{Time.now}} rather than 
 {{Time.monotonicNow}} to calculate how long it's been since the last 
 checkpoint. This can lead to issues when the system time is changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)