[
https://issues.apache.org/jira/browse/AMBARI-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071653#comment-15071653
]
Hudson commented on AMBARI-14497:
---------------------------------
SUCCESS: Integrated in Ambari-trunk-Commit #4093 (See
[https://builds.apache.org/job/Ambari-trunk-Commit/4093/])
AMBARI-14497. NFS Gateway fails to start with /tmp/.hdfs-nfs warning in logs
(aonishuk:
[http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=bc841dce6305fd3b4a018c79dfbf258cd9910a12])
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_nfsgateway.py
* ambari-server/src/test/python/stacks/2.0.6/configs/default.json
* ambari-server/src/test/python/stacks/2.0.6/configs/secured.json
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_nfsgateway.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py
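
The two package scripts above drive the NFS gateway start-up; the change presumably prepares the gateway's dump directory (nfs.file.dump.dir, /tmp/.hdfs-nfs by default) so that a stale directory left by a previous deploy cannot block the start. A rough, hypothetical sketch of that idea in plain Python follows; this is not the actual patch, and the path, owner, group and mode below are assumptions (in Ambari they would come from the stack parameters rather than being hard-coded):

    import grp
    import os
    import pwd
    import shutil

    # Assumed values; in hdfs_nfsgateway.py / params_linux.py these would be
    # read from hdfs-site (nfs.file.dump.dir) and the cluster configuration.
    DUMP_DIR = "/tmp/.hdfs-nfs"
    OWNER = "hdfs"
    GROUP = "hadoop"

    def prepare_dump_dir(path=DUMP_DIR, owner=OWNER, group=GROUP, mode=0o755):
        """Recreate the NFS gateway dump directory with a usable owner and mode."""
        if os.path.isdir(path):
            # A stale directory from an earlier deploy (e.g. owned by a UID that
            # no longer exists, as in this report) is removed so the gateway can
            # recreate it cleanly on start.
            shutil.rmtree(path, ignore_errors=True)
        os.makedirs(path, mode)
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
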
> NFS Gateway fails to start with /tmp/.hdfs-nfs warning in logs
> --------------------------------------------------------------
>
> Key: AMBARI-14497
> URL: https://issues.apache.org/jira/browse/AMBARI-14497
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
> Fix For: 2.4.0
>
> Attachments: AMBARI-14497.patch
>
>
> Steps:
> 1. Before the deploy, log and PID dirs were created with root owner, root group,
> and 000 permissions.
> 2. Deployed the cluster via blueprints.
> 3. Removed the cluster (reset + hostcleanup).
> 4. Log and PID dirs were recreated with root owner, root group, and 000
> permissions.
> 5. Deployed the cluster via the UI.
> Result: the NFS Gateway cannot be started.
> This looks like an HDFS issue:
>
>
> Execution of 'ambari-sudo.sh -H -E
> /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config
> /usr/hdp/current/hadoop-client/conf start nfs3' returned 1. starting nfs3,
> logging to /grid/0/log/hadoop/root/hadoop-hdfs-nfs3-bug49661-5.out
>
> The NFS gateway failed to start, with the following in the logs:
>
>
> bug49661-5:~ # tail -f
> /grid/0/log/hadoop/root/hadoop-hdfs-nfs3-bug49661-5.log
> 2015-12-16 17:28:45,064 INFO impl.MetricsSystemImpl
> (MetricsSystemImpl.java:startTimer(377)) - Scheduled snapshot period at 60
> second(s).
> 2015-12-16 17:28:45,064 INFO impl.MetricsSystemImpl
> (MetricsSystemImpl.java:start(192)) - Nfs3 metrics system started
> 2015-12-16 17:28:45,086 INFO oncrpc.RpcProgram
> (RpcProgram.java:<init>(84)) - Will accept client connections from
> unprivileged ports
> 2015-12-16 17:28:45,094 INFO security.ShellBasedIdMapping
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static
> UID/GID mapping because '/etc/nfs.map' does not exist.
> 2015-12-16 17:28:45,105 INFO nfs3.WriteManager
> (WriteManager.java:<init>(92)) - Stream timeout is 600000ms.
> 2015-12-16 17:28:45,105 INFO nfs3.WriteManager
> (WriteManager.java:<init>(100)) - Maximum open streams is 256
> 2015-12-16 17:28:45,106 INFO nfs3.OpenFileCtxCache
> (OpenFileCtxCache.java:<init>(54)) - Maximum open streams is 256
> 2015-12-16 17:28:45,338 INFO nfs3.RpcProgramNfs3
> (RpcProgramNfs3.java:<init>(205)) - Configured HDFS superuser is
> 2015-12-16 17:28:45,339 INFO nfs3.RpcProgramNfs3
> (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory
> /tmp/.hdfs-nfs
> 2015-12-16 17:28:45,343 WARN fs.FileUtil (FileUtil.java:deleteImpl(187))
> - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
>
> I checked, and it seems /tmp/.hdfs-nfs was an existing folder which I could
> delete without problems (as root, since we run the NFS gateway as root). It looks
> like an NFS issue, since the gateway is not able to delete that folder for some
> reason.
>
>
> drwxr-xr-x 2 2815 hadoop 4096 Dec 16 13:40 .hdfs-nfs
>
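
For anyone who hits the same state before picking up this fix, a small, hypothetical check along the lines of what the reporter did by hand (run as root, against the default dump directory path, which is an assumption here):

    import os
    import shutil
    import stat

    path = "/tmp/.hdfs-nfs"  # default nfs.file.dump.dir
    if os.path.isdir(path):
        st = os.stat(path)
        # The listing above shows the directory owned by a leftover UID (2815);
        # print the current owner/mode, then remove the directory so the
        # gateway can recreate it on the next start.
        print("uid=%d gid=%d mode=%o" % (st.st_uid, st.st_gid, stat.S_IMODE(st.st_mode)))
        shutil.rmtree(path)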