Hi Tim. Great to hear you're making progress. You're on the right track, but I forget the details. But yes: you'll have to run some simple commands as user hdfs to set up permissions for "root".
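Those setup commands usually look something like the following. This is just a sketch run as the hdfs superuser; the exact directories depend on your cluster layout, and /user/root is an assumption on my part, not something taken from init-hdfs.sh:

```shell
# Give "root" a home directory in HDFS so jobs run as root can stage files.
# Run these as the hdfs superuser; adjust paths for your own layout.
sudo -u hdfs hadoop fs -mkdir -p /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root

# Verify the ownership took effect.
sudo -u hdfs hadoop fs -ls /user
```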
You can try running your tests as user "hdfs". That's a good hammer to use, since hdfs is the superuser on Hadoop systems that use HDFS as the file system. (On other systems, like Gluster, root is usually the superuser.) Directory perms are always a pain in Hadoop setup; if there's anything you'd suggest to make it more user friendly, maybe create a JIRA.

On this front, we have done BIGTOP-1200, which now encodes all the info in a JSON file so that any FileSystem can use Bigtop for provisioning. I can discuss that with you later if you want (send me a private message). I haven't merged it to replace init-hdfs yet, but it is functionally equivalent, and it can be found in the code base (see JIRAs BIGTOP-952 and BIGTOP-1200 for details).

> On Sep 24, 2014, at 12:50 PM, Tim Harsch <[email protected]> wrote:
>
> Thanks, that was helpful. So, I looked closely at the TestPigSmoke test and
> tried repeating its steps manually, which really helped. I was able to
> track the issue down to a perms problem for running as user root. See this:
>
> [root@localhost pig]# hadoop fs -ls /
> Found 6 items
> drwxrwxrwx   - hdfs  supergroup   0 2014-09-24 00:32 /benchmarks
> drwxr-xr-x   - hbase hbase        0 2014-09-24 00:32 /hbase
> drwxr-xr-x   - solr  solr         0 2014-09-24 00:32 /solr
> drwxrwxrwt   - hdfs  supergroup   0 2014-09-24 18:33 /tmp
> drwxr-xr-x   - hdfs  supergroup   0 2014-09-24 00:33 /user
> drwxr-xr-x   - hdfs  supergroup   0 2014-09-24 00:32 /var
>
> [root@localhost pig]# hadoop fs -ls /tmp
> Found 2 items
> drwxrwxrwx   - mapred mapred      0 2014-09-24 00:37 /tmp/hadoop-yarn
> drwxr-xr-x   - root   supergroup  0 2014-09-24 01:29 /tmp/temp-1450563950
>
> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn
> Found 1 items
> drwxrwx---   - mapred mapred      0 2014-09-24 00:37 /tmp/hadoop-yarn/staging
>
> [root@localhost pig]# hadoop fs -ls /tmp/hadoop-yarn/staging
> ls: Permission denied: user=root, access=READ_EXECUTE,
> inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---
>
> OK, makes sense.
> But I'm a little confused. I thought all the directories
> would be set up correctly by the script /usr/lib/hadoop/libexec/init-hdfs.sh,
> which, as you can tell from the above output, I did run. From the docs
> I've read, the assumption is that after running
> /usr/lib/hadoop/libexec/init-hdfs.sh all tests should pass… but perhaps I
> missed some instruction somewhere.
>
> Tim
>
>
> From: jay vyas <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Wednesday, September 24, 2014 5:46 AM
> To: "[email protected]" <[email protected]>
> Subject: Re: smoke tests in 0.7.0
>
> Thanks Tim. It could be related to permissions on the DFS, depending on
> the user you are running the job as.
>
> Can you paste the error you got? In general the errors should be easy to
> track down in smoke-tests (you can just hack some print statements into the
> groovy script under pig/).
> Also, the stack trace should give you some information?
