-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41719/#review111893
-----------------------------------------------------------

Ship it!


Ship It!

- Dmitro Lisnichenko


On Dec. 25, 2015, 4 p.m., Andrew Onischuk wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/41719/
> -----------------------------------------------------------
> 
> (Updated Dec. 25, 2015, 4 p.m.)
> 
> 
> Review request for Ambari and Dmitro Lisnichenko.
> 
> 
> Bugs: AMBARI-14497
>     https://issues.apache.org/jira/browse/AMBARI-14497
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Steps:
> 
>   1. Before the deploy, created the log and PID dirs owned by root:root 
> with 000 permissions.
>   2. Deployed the cluster via blueprints.
>   3. Removed the cluster (reset + hostcleanup).
>   4. Recreated the log and PID dirs owned by root:root with 000 
> permissions.
>   5. Deployed the cluster via the UI (a sketch of the dir state from 
> steps 1 and 4 follows the list).
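> 
> Not part of the patch: a minimal Python sketch of the problematic dir state
> from steps 1 and 4, assuming illustrative log/PID dir paths:
> 
>     
>     
>     # Illustrative only: recreate log/PID dirs owned by root:root with 000
>     # permissions, the state that later breaks the NFS gateway start.
>     import os
>     
>     for path in ['/grid/0/log/hadoop', '/var/run/hadoop']:
>         if not os.path.exists(path):
>             os.makedirs(path)
>         os.chown(path, 0, 0)   # root:root
>         os.chmod(path, 0o000)  # no access for anyone
>     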
> 
> Result: the NFS Gateway cannot be started.  
> This looks like an HDFS issue:
> 
>     
>     
>     Execution of 'ambari-sudo.sh  -H -E 
> /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config 
> /usr/hdp/current/hadoop-client/conf start nfs3' returned 1. starting nfs3, 
> logging to /grid/0/log/hadoop/root/hadoop-hdfs-nfs3-bug49661-5.out
>     
> 
> The NFS gateway failed to start with the following in the logs:
> 
>     
>     
>     bug49661-5:~ # tail -f 
> /grid/0/log/hadoop/root/hadoop-hdfs-nfs3-bug49661-5.log
>     2015-12-16 17:28:45,064 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(377)) - Scheduled snapshot period at 60 
> second(s).
>     2015-12-16 17:28:45,064 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:start(192)) - Nfs3 metrics system started
>     2015-12-16 17:28:45,086 INFO  oncrpc.RpcProgram 
> (RpcProgram.java:<init>(84)) - Will accept client connections from 
> unprivileged ports
>     2015-12-16 17:28:45,094 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because '/etc/nfs.map' does not exist.
>     2015-12-16 17:28:45,105 INFO  nfs3.WriteManager 
> (WriteManager.java:<init>(92)) - Stream timeout is 600000ms.
>     2015-12-16 17:28:45,105 INFO  nfs3.WriteManager 
> (WriteManager.java:<init>(100)) - Maximum open streams is 256
>     2015-12-16 17:28:45,106 INFO  nfs3.OpenFileCtxCache 
> (OpenFileCtxCache.java:<init>(54)) - Maximum open streams is 256
>     2015-12-16 17:28:45,338 INFO  nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:<init>(205)) - Configured HDFS superuser is 
>     2015-12-16 17:28:45,339 INFO  nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory 
> /tmp/.hdfs-nfs
>     2015-12-16 17:28:45,343 WARN  fs.FileUtil (FileUtil.java:deleteImpl(187)) 
> - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
>     
> 
> I checked, and /tmp/.hdfs-nfs was an existing folder which I could delete
> without problems (as root, since we run the NFS gateway as root). It seems
> like an NFS gateway issue, since it is not able to delete that folder for
> some reason (a sketch of a possible cleanup follows the listing below).
> 
>     
>     
>     drwxr-xr-x 2      2815 hadoop        4096 Dec 16 13:40 .hdfs-nfs
> 
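> A hypothetical sketch of the kind of pre-start cleanup the patch could add
> to hdfs_nfsgateway.py (the real change may instead use Ambari's
> resource_management resources); the dump dir path is the default and is
> assumed here:
> 
>     
>     
>     import os
>     import shutil
>     
>     # Assumed default of nfs.dump.dir; the actual value comes from config.
>     NFS_DUMP_DIR = '/tmp/.hdfs-nfs'
>     
>     def prepare_nfs_dump_dir(path=NFS_DUMP_DIR):
>         # The gateway runs as root, so a stale dump dir left by a previous
>         # cluster (owned by an orphaned uid) can simply be removed before
>         # start; the gateway then recreates it with the right ownership.
>         if os.path.exists(path):
>             shutil.rmtree(path, ignore_errors=True)
>     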
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_nfsgateway.py d874b2e 
>   ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py 870f627 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_nfsgateway.py 6396c1e 
>   ambari-server/src/test/python/stacks/2.0.6/configs/default.json bc40657 
>   ambari-server/src/test/python/stacks/2.0.6/configs/secured.json 9533473 
> 
> Diff: https://reviews.apache.org/r/41719/diff/
> 
> 
> Testing
> -------
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>
