Hi Harsh,
Thanks for re-routing to HBase user-group. ;-)
I followed the same steps as you, at least I tried to.
My cluster appears to be working and I outlined my client configuration below.
BTW: I knew that the HBase master authenticated to ZooKeeper via the quorum
ensemble in order to find where the Hadoop DFS lives, but I didn't realize that
the region servers and HBase clients also needed to authenticate to ZooKeeper.
Can you explain why?
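My guess is that each region server would need its own keytab-based JAAS
"Client" section for its ZooKeeper connection, along the lines of the sketch
below (the keytab path and principal here are placeholders I made up, not my
actual setup). Is that right?

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/hbase/conf/hbase.keytab"
  principal="hbase/<fully.qualified.hostname>@NA.SAS.COM";
};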
Anyway, here are the traces that I collected.
HBase master:
12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: Will send token of size 50
from initSASLContext.
12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: SASL client context
established. Negotiated QoP: auth
12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception
org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
    at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
(the same exception and stack trace repeated at 16:40:48 and 16:40:49)
ZooKeeper:
12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Responding to client SASL token.
12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Size of client SASL token: 67
Krb5Context.unwrap: token=[60 41 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00
ff ff ff ff 65 66 6b 58 fd f1 6b ec 27 53 22 23 5d 7b 03 33 0b e3 2d 7f d3 a9
13 62 01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d 01 ]
Krb5Context.unwrap: data=[01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e
43 4f 4d ]
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Successfully
authenticated client: [email protected];
[email protected].
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Setting authorizedID:
saspad
12/06/09 16:40:47 INFO server.ZooKeeperServer: adding SASL authorization for
authorizationID: saspad
12/06/09 16:40:50 DEBUG server.FinalRequestProcessor: Processing request::
sessionid:0x137d2f4f3350005 type:ping cxid:0xfffffffffffffffe
zxid:0xffffffffffff
It looks like my client identity, "saspad", flowed across the wire successfully.
Thanks again for taking a look at this.
-----Original Message-----
From: Harsh J [mailto:[email protected]]
Sent: Saturday, June 09, 2012 11:26 AM
To: [email protected]
Cc: Tony Dean
Subject: Re: hbase client security (cluster is secure)
Hi again Tony,
Moving this to [email protected] (bcc'd [email protected]).
Please use the right user group lists for best responses. I've added you to CC
in case you aren't subscribed to the HBase user lists.
Can you share the whole error/stack trace (if any) and the logs you get at the
HMaster that show the AccessControlException? It would be helpful to see which
particular class/operation logged it, so we can help you specifically.
I have a 0.92-based cluster running after having followed
http://hbase.apache.org/book.html#zookeeper and
https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration
and it seems to work well enough with auth enabled.
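For comparison, the security-related server-side bits of my hbase-site.xml
look roughly like the following (the principal names and keytab path are from
my own setup, so treat them as placeholders for yours):

<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>

<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>

<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/[email protected]</value>
</property>

<property>
  <name>hbase.master.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>

<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/[email protected]</value>
</property>

<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>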
On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <[email protected]> wrote:
> Hi all,
>
> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.
> Now a simple test is to connect an HBase client (e.g., the shell) to see its
> behavior.
>
> Well, I get the following message on the hbase master:
> AccessControlException: authentication is required.
>
> Looking at the code, it appears that the client passed the "simple"
> authentication byte in the RPC header. I don't know why.
>
> My client configuration is as follows:
>
> hbase-site.xml:
> <property>
> <name>hbase.security.authentication</name>
> <value>kerberos</value>
> </property>
>
> <property>
> <name>hbase.rpc.engine</name>
> <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
> </property>
>
> hbase-env.sh:
> export HBASE_OPTS="$HBASE_OPTS
> -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>
> hbase.jaas:
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=false
> useTicketCache=true
> };
>
> I issue kinit for the principal I want to use, then invoke the hbase shell.
> I simply issue "list" and see the error on the server.
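> Concretely, the sequence is (saspad is the principal I'm testing with):
>
>   $ kinit saspad
>   $ hbase shell
>   hbase(main):001:0> list
>
> It's the "list" call that triggers the AccessControlException on the master.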
>
> Any ideas what I am doing wrong?
>
> Thanks so much!
>
>
> _____________________________________________
> From: Tony Dean
> Sent: Tuesday, June 05, 2012 5:41 PM
> To: [email protected]
> Subject: hadoop file permission 1.0.3 (security)
>
>
> Can someone detail the options that are available to set file permissions at
> the Hadoop and OS level? Here's what I have discovered thus far:
>
> dfs.permissions = true|false (works as advertised)
> dfs.supergroup = supergroup (works as advertised)
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) -
> it appears to set the permissions for files created in hadoop fs (minus
> execute permission). Why was dfs.umask deprecated? What's the difference
> between the two?
> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I
> thought it was supposed to set permissions on blocks at the OS level. (See
> the snippet after this list for how I'm setting the last two.)
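> For concreteness, here's how I'm setting the last two in hdfs-site.xml (the
> values shown are just the ones I tried, as an example):
>
> <property>
>   <name>dfs.umaskmode</name>
>   <value>022</value>
> </property>
>
> <property>
>   <name>dfs.datanode.data.dir.perm</name>
>   <value>700</value>
> </property>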
>
> Are there any other file permission configuration properties?
>
> What I would really like to do is set data block file permissions at the OS
> level so that the blocks are locked down from all users except the superuser
> and supergroup, but can still be accessed via the Hadoop API as allowed by
> HDFS permissions. Is this possible?
>
> Thanks.
>
>
> Tony Dean
> SAS Institute Inc.
> Senior Software Developer
> 919-531-6704
>
>
>
>
--
Harsh J