[ 
https://issues.apache.org/jira/browse/HADOOP-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292073#comment-13292073
 ] 

Colin Patrick McCabe commented on HADOOP-8499:
----------------------------------------------

{code}
cmccabe@vm0:~$ cat /etc/issue
Ubuntu 12.04 LTS \n \l
{code}

{code}
cmccabe@vm0:~$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
syslog:x:101:103::/home/syslog:/bin/false
messagebus:x:102:105::/var/run/dbus:/bin/false
whoopsie:x:103:106::/nonexistent:/bin/false
landscape:x:104:109::/var/lib/landscape:/bin/false
sshd:x:105:65534::/var/run/sshd:/usr/sbin/nologin
cmccabe:x:1000:1000:cmccabe,,,:/home/cmccabe:/bin/bash
{code}

Notice that there are no user IDs between 500 and 1000 whatsoever.

Also notice that the nobody user has ID 65534, which is well above the minimum, 
so apparently we'll happily allow people to run MapReduce jobs as that user.

This whole "check the numeric value of the user ID" approach stinks.  I think we 
should either validate against a whitelist, or check whether the given user has 
a valid login shell.
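
Just to illustrate the login-shell idea: something along these lines could work, 
using getpwnam() and getusershell() to compare the user's shell against 
/etc/shells.  This is only a sketch; has_valid_login_shell is a made-up helper, 
not anything that exists in container-executor today.

{code}
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Return 1 if the named user exists and their login shell is listed in
 * /etc/shells, 0 otherwise.  Accounts whose shell is /bin/false or
 * /usr/sbin/nologin (normally absent from /etc/shells) would be rejected.
 */
static int has_valid_login_shell(const char *username)
{
  struct passwd *pw = getpwnam(username);
  if (pw == NULL || pw->pw_shell == NULL) {
    return 0;                      /* unknown user, or no shell at all */
  }
  int valid = 0;
  char *shell;
  setusershell();                  /* rewind /etc/shells */
  while ((shell = getusershell()) != NULL) {
    if (strcmp(shell, pw->pw_shell) == 0) {
      valid = 1;
      break;
    }
  }
  endusershell();
  return valid;
}

int main(int argc, char **argv)
{
  const char *user = (argc > 1) ? argv[1] : "nobody";
  printf("%s: %s\n", user,
         has_valid_login_shell(user) ? "valid login shell" : "rejected");
  return 0;
}
{code}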
                
> fix mvn compile -Pnative on CentOS / RHEL / Fedora / SuSE / etc
> ---------------------------------------------------------------
>
>                 Key: HADOOP-8499
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8499
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HADOOP-8499.002.patch
>
>
> On Linux platforms where user IDs start at 500 rather than 1000, the build is 
> currently broken.  This includes CentOS, RHEL, Fedora, SuSE, and probably 
> most other Linux platforms.  It does happen to work on Debian and Ubuntu, 
> which explains why Jenkins hasn't caught it yet.
> Other users will see something like this:
> {code}
> [INFO] Requested user cmccabe has id 500, which is below the minimum allowed 
> 1000
> [INFO] FAIL: test-container-executor
> [INFO] ================================================
> [INFO] 1 of 1 test failed
> [INFO] Please report to [email protected]
> [INFO] ================================================
> [INFO] make[1]: *** [check-TESTS] Error 1
> [INFO] make[1]: Leaving directory 
> `/home/cmccabe/hadoop4/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn
> -server/hadoop-yarn-server-nodemanager/target/native/container-executor'
> {code}
> And then the build fails.  Since native unit tests are currently unskippable 
> (HADOOP-8480) this makes the project unbuildable.
> The easy solution to this is to relax the constraint for the unit test.  
> Since the unit test already writes its own configuration file, we just need 
> to change it there.
> In general, I believe that it would make sense to change this to 500 across 
> the board.  I'm not aware of any Linuxes that create system users with IDs 
> higher than or equal to 500.  System user IDs tend to be below 200.
> However, if we do nothing else, we should at least fix the build by relaxing 
> the constraint for unit tests.
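
For what it's worth, the relaxation described above boils down to lowering the 
minimum UID in the configuration file that the unit test writes out.  A rough 
sketch of what that file could contain (the key names follow container-executor.cfg 
as I understand it; the values are purely illustrative):

{code}
# container-executor.cfg as written by the unit test (illustrative values only)
yarn.nodemanager.linux-container-executor.group=mapred
banned.users=root
min.user.id=500
{code}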
