Hi again

Early on in my attempts to Kerberise our Hadoop instance, I saw an
error message suggesting that I needed to add a list of users who are
allowed to run jobs to the last line of Hadoop's

container-executor.cfg

for which the default content is

yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
banned.users=#comma separated list of users who can not run applications
min.user.id=1000#Prevent other super-users
allowed.system.users=##comma separated list of system users who CAN run applications


and after I had dropped min.user.id so that the yarn user on our
systems could run jobs, AND added a list of users with UIDs above that
value to allowed.system.users, those other users were able to run jobs.
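
For reference, the change looked something like the following, where
the group name, UID value and usernames are illustrative placeholders
rather than our real ones:

yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=
min.user.id=500#lowered so that our yarn user's UID clears the threshold
allowed.system.users=alice,bob##the extra users I then added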

I have now come to test removing a user from the "allowed" list, but I
can't seem to prevent that user from running MapReduce jobs, no
matter which of the various daemons I stop and start, including
shutting down and restarting the whole cluster.
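
In case it matters, the restarts I tried were along the lines of the
stock scripts/commands below (Hadoop 3 style invocations; older
releases would use the yarn-daemon.sh equivalents):

stop-yarn.sh && start-yarn.sh
yarn --daemon stop nodemanager
yarn --daemon start nodemanager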

Should I be reading that

allowed.system.users=

list as a list of users whose UIDs fall BELOW the

min.user.id=

threshold, rather than as an actual "only users in this list may run
jobs" list?
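
In other words, given something like this (placeholder names again):

min.user.id=500
allowed.system.users=hdfs,mapred

is that second line just exempting the named low-UID system accounts
from the min.user.id check, or is it meant to be the complete set of
users permitted to run jobs at all?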

Clearly, one can't run jobs if one doesn't have access to directories
to put data into, so that is a kind of "job control" ACL in itself, but
I was hoping that the underlying HDFS might contain a wider set of
users than those allowed to run jobs at any given time, in which case
altering that ability via the

container-executor.cfg

list seemed a simple way to achieve it.

Any clues/insight welcome,
Kevin

---
Kevin M. Buckley

eScience Consultant
School of Engineering and Computer Science
Victoria University of Wellington
New Zealand
