Hi all,

I have made changes to the hadoop-0.18.2 code to allow hadoop super-user access 
only from a specified IP range; if the request comes from an untrusted IP, an 
exception is thrown. I would like to contribute this as a patch so that people 
can use it in their environment if needed. Can someone tell me the procedure 
for creating a patch? I can see from the trunk code that some related work is 
happening. In particular, I am looking at the Server.java code (a 
PrivilegedActionException is thrown for an untrusted user, I believe). Can 
someone please clarify whether that code was written for the purpose we are 
discussing (validating that the caller is a trusted super user connecting from 
a specific remote IP)? If not, then I would like to submit my patch.

Thanks
Pallavi

 
----- Original Message -----
From: "Ted Dunning" <[email protected]>
To: [email protected]
Sent: Friday, July 24, 2009 6:22:12 AM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
Subject: Re: Remote access to cluster using user as hadoop

Interesting approach.

My guess is that this would indeed protect the datanodes from accidental
"attack" by stopping access before they are involved.

You might also consider just changing the name of the magic hadoop user to
something less likely to collide.  The name "hadoop" is not far from what
somebody might pick as a user name for experimenting or running scheduled
jobs.

On Thu, Jul 23, 2009 at 3:28 PM, Ian Holsman <[email protected]> wrote:

> I was thinking of alternatives similar to creating a proxy nameserver that
> non-privileged users can attach to, which forwards requests to the "real"
> nameserver, or just hacking the nameserver so that it switches "hadoop" to
> "hadoop_remote" for sessions from untrusted IPs.
>
> Not being familiar with the code, I presume there is a point where the
> code determines the user ID. Can anyone point me to that bit?
> I just want to hack it to downgrade superusers, and it doesn't have to be
> too clean or work for every edge case. It's more to stop accidental
> problems.
>
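
The "switch the user name for untrusted sessions" idea quoted above could be sketched roughly as follows. This is purely illustrative: the class and method names, the `isTrustedAddress` predicate, and the loopback-only trust rule are assumptions, not actual Hadoop code or the point in Server.java where the user ID is determined.

```java
import java.net.InetAddress;

// Hypothetical sketch: at the point where the server resolves the caller's
// user name, rewrite the privileged name for connections arriving from
// outside the trusted set, silently downgrading the super user.
public class UserDowngrade {
    static final String SUPER_USER = "hadoop";
    static final String DOWNGRADED_USER = "hadoop_remote";

    // Stand-in for a real trust check (e.g. an IP-range lookup).
    // Assumption for this sketch: only loopback connections are trusted.
    static boolean isTrustedAddress(InetAddress remote) {
        return remote.isLoopbackAddress();
    }

    static String effectiveUser(String claimedUser, InetAddress remote) {
        if (SUPER_USER.equals(claimedUser) && !isTrustedAddress(remote)) {
            return DOWNGRADED_USER; // strip super-user privileges
        }
        return claimedUser;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(effectiveUser("hadoop", InetAddress.getByName("127.0.0.1")));   // hadoop
        System.out.println(effectiveUser("hadoop", InetAddress.getByName("192.168.1.5"))); // hadoop_remote
    }
}
```

Since the downgrade is silent, accidental remote use of the super-user account would simply behave as an ordinary user rather than failing loudly, which matches the "stop accidental problems" goal above.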



-- 
Ted Dunning, CTO
DeepDyve
