RE: Connecting Hadoop HA cluster via java client

2016-10-11 Thread 권병창
Hi.
 
1. The minimal configuration needed to connect to an HA namenode is the set of properties below.
ZooKeeper information is not necessary.
 
dfs.nameservices
dfs.ha.namenodes.${dfs.nameservices}
dfs.namenode.rpc-address.${dfs.nameservices}.nn1 
dfs.namenode.rpc-address.${dfs.nameservices}.nn2
dfs.namenode.http-address.${dfs.nameservices}.nn1 
dfs.namenode.http-address.${dfs.nameservices}.nn2
dfs.client.failover.proxy.provider.${dfs.nameservices}=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
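
For example, a minimal sketch of setting these from a Java client (the nameservice name "mycluster" and the example hostnames/ports are placeholders, not values from a real cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://mycluster");
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");
    conf.set("dfs.namenode.http-address.mycluster.nn1", "namenode1.example.com:50070");
    conf.set("dfs.namenode.http-address.mycluster.nn2", "namenode2.example.com:50070");
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    // The failover proxy provider figures out which of the configured namenodes is active.
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.exists(new Path("/")));
  }
}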
 
 
2. The client uses a round-robin manner to select the active namenode.
 
 
-Original Message-
From: "Pushparaj Motamari"pushpara...@gmail.com 
To: user@hadoop.apache.org; 
Cc: 
Sent: 2016-10-12 (Wed) 03:20:53
Subject: Connecting Hadoop HA cluster via java client
 
Hi,
I have two questions pertaining to accessing the Hadoop HA cluster from a Java client.

1. Is it necessary to supply
conf.set("dfs.ha.automatic-failover.enabled",true);
and
conf.set("ha.zookeeper.quorum","zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");

in addition to the other properties set in the code below?
private Configuration initHAConf(URI journalURI, Configuration conf) {
  conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
  journalURI.toString());
  
  String address1 = "127.0.0.1:" + NN1_IPC_PORT;
  String address2 = "127.0.0.1:" + NN2_IPC_PORT;
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN1), address1);
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN2), address2);
  conf.set(DFSConfigKeys.DFS_NAMESERVICES, NAMESERVICE);
  conf.set(DFSUtil.addKeySuffixes(DFS_HA_NAMENODES_KEY_PREFIX, NAMESERVICE),
  NN1 + "," + NN2);
  conf.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + NAMESERVICE,
  ConfiguredFailoverProxyProvider.class.getName());
  conf.set("fs.defaultFS", "hdfs://" + NAMESERVICE);
  
  return conf;
}

2. If we supply the ZooKeeper configuration details mentioned in question 1, is it still 
necessary to set the primary and secondary namenode addresses as in the code above? Since 
we have given the ZooKeeper connection details, the client should be able to figure out 
the active namenode connection details.


Regards

Pushparaj





Connecting Hadoop HA cluster via java client

2016-10-11 Thread Pushparaj Motamari
Hi,

I have two questions pertaining to accessing the Hadoop HA cluster from a Java client.

1. Is it necessary to supply

conf.set("dfs.ha.automatic-failover.enabled",true);

and

conf.set("ha.zookeeper.quorum","zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");

in addition to the other properties set in the code below?

private Configuration initHAConf(URI journalURI, Configuration conf) {
  conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
  journalURI.toString());

  String address1 = "127.0.0.1:" + NN1_IPC_PORT;
  String address2 = "127.0.0.1:" + NN2_IPC_PORT;
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN1), address1);
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN2), address2);
  conf.set(DFSConfigKeys.DFS_NAMESERVICES, NAMESERVICE);
  conf.set(DFSUtil.addKeySuffixes(DFS_HA_NAMENODES_KEY_PREFIX, NAMESERVICE),
  NN1 + "," + NN2);
  conf.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + NAMESERVICE,
  ConfiguredFailoverProxyProvider.class.getName());
  conf.set("fs.defaultFS", "hdfs://" + NAMESERVICE);

  return conf;
}

2. If we supply the ZooKeeper configuration details mentioned in question 1, is it
still necessary to set the primary and secondary namenode addresses as in the code
above? Since we have given the ZooKeeper connection details, the client should be
able to figure out the active namenode connection details.


Regards

Pushparaj


RE: Authentication Failure talking to Ranger KMS

2016-10-11 Thread Benjamin Ross
Just for kicks I tried applying the patch in that ticket and it didn't have any 
effect.  It makes sense because my issue is on CREATE, and the ticket only has 
to do with OPEN.

Note that I don't have these issues using WebHDFS, only using httpfs, so it 
definitely seems like we're on the right track...

Thanks in advance,
Ben
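
As an aside (and only a guess whether it applies in this case): KMS keeps its own proxyuser allow-list, separate from the Ranger policies, which controls whether a service user such as httpfs may impersonate other users. A kms-site.xml sketch, with placeholder wildcard values that a real deployment would usually restrict:

<!-- Sketch only: allow the "httpfs" service user to impersonate end users towards KMS.
     The wildcard values below are placeholders, not a recommendation. -->
<property>
  <name>hadoop.kms.proxyuser.httpfs.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>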




From: Benjamin Ross
Sent: Tuesday, October 11, 2016 12:02 PM
To: Wei-Chiu Chuang
Cc: user@hadoop.apache.org; u...@ranger.incubator.apache.org
Subject: RE: Authentication Failure talking to Ranger KMS

That seems promising.  But shouldn't I be able to work around it by just 
ensuring that httpfs has all necessary privileges in the KMS service under 
Ranger?

Thanks,
Ben



From: Wei-Chiu Chuang [weic...@cloudera.com]
Sent: Tuesday, October 11, 2016 11:57 AM
To: Benjamin Ross
Cc: user@hadoop.apache.org; u...@ranger.incubator.apache.org
Subject: Re: Authentication Failure talking to Ranger KMS

Seems to me you encountered this bug?
HDFS-10481
If you’re using CDH, this is fixed in CDH5.5.5, CDH5.7.2 and CDH5.8.2

Wei-Chiu Chuang
A very happy Clouderan

On Oct 11, 2016, at 8:38 AM, Benjamin Ross wrote:

All,
I'm trying to use httpfs to write to an encryption zone with security off.  I 
can read from an encryption zone, but I can't write to one.

Here's the applicable namenode logs.  httpfs and root both have all possible 
privileges in the KMS.  What am I missing?


2016-10-07 15:48:16,164 DEBUG ipc.Server 
(Server.java:authorizeConnection(2095)) - Successfully authorized userInfo {
  effectiveUser: "root"
  realUser: "httpfs"
}
protocol: "org.apache.hadoop.hdfs.protocol.ClientProtocol"

2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:processOneRpc(1902)) -  
got #2
2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:run(2179)) - IPC Server 
handler 9 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
10.41.1.64:47622 Call#2 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-07 15:48:16,165 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:root (auth:PROXY) via httpfs (auth:SIMPLE) 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2205)
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(NameNodeRpcServer.java:create(699)) - *DIR* NameNode.create: file 
/tmp/cryptotest/hairyballs for DFSClient_NONMAPREDUCE_-1005188439_28 at 
10.41.1.64
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(FSNamesystem.java:startFileInt(2411)) - DIR* NameSystem.startFile: 
src=/tmp/cryptotest/hairyballs, holder=DFSClient_NONMAPREDUCE_-1005188439_28, 
clientMachine=10.41.1.64, createParent=true, replication=3, createFlag=[CREATE
, OVERWRITE], blockSize=134217728, 
supportedVersions=[CryptoProtocolVersion{description='Encryption zones', 
version=2, unknownValue=null}]
2016-10-07 15:48:16,167 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:hdfs (auth:SIMPLE) 
from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:484)
2016-10-07 15:48:16,171 DEBUG client.KerberosAuthenticator 
(KerberosAuthenticator.java:authenticate(205)) - Using fallback authenticator 
sequence.
2016-10-07 15:48:16,176 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:hdfs 
(auth:SIMPLE) 
cause:org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, messag
e: Forbidden
2016-10-07 15:48:16,176 DEBUG ipc.Server (ProtobufRpcEngine.java:call(631)) - 
Served: create queueTime= 2 procesingTime= 10 exception= IOException
2016-10-07 15:48:16,177 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:root 
(auth:PROXY) via httpfs (auth:SIMPLE) cause:java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: org.apach
e.hadoop.security.authentication.client.AuthenticationException: Authentication 
failed, status: 403, message: Forbidden
2016-10-07 15:48:16,177 INFO  ipc.Server (Server.java:logException(2299)) - IPC 
Server handler 9 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 10.41.1.64:47622 
Call#2 Retry#0
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, message: Forbidden
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:750)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
at 

RE: how to add a shareable node label?

2016-10-11 Thread Frank Luo
Hah, how so? I am confused, as I was under the impression that I needed sharing but 
not preemption.

Let’s model this out.

Assume I have 4 “normal” machines, node1-4, and two special machines, node8 and node9, 
on which JobA can be executed.

And I need two queues, ProdQ and TestQ equally sharing Node1-4, and a 
“LabeledQ” with node8/9.

When ProdQ is full, it can overflow to TestQ and further to LabeledQ. If TestQ is full, 
the tasks stay in TestQ, or optionally overflow to LabeledQ (either way is fine as long 
as they don’t go to ProdQ). And when JobA is running, it can only go to LabeledQ. If 
something else is on LabeledQ, JobA waits.

Do you mind illustrating how to configure the queues to achieve what I am looking 
for?

Thank you Sunil.
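
Roughly, in capacity-scheduler.xml terms, the setup above might look like the sketch below. It is unverified, the queue names, capacities and label name are placeholders, and whether the label should be exclusive or shareable (non-exclusive) is exactly the open question in this thread:

<!-- Unverified sketch of the scenario above; names and numbers are placeholders.
     A shareable label would first be created as non-exclusive and mapped to the
     special nodes, e.g.:
       yarn rmadmin -addToClusterNodeLabels "labelA(exclusive=false)"
       yarn rmadmin -replaceLabelsOnNode "node8=labelA node9=labelA" -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>ProdQ,TestQ,LabeledQ</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.ProdQ.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.TestQ.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.LabeledQ.capacity</name>
  <value>0</value>
</property>
<!-- Give the labelA partition entirely to LabeledQ. -->
<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.labelA.capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.LabeledQ.accessible-node-labels</name>
  <value>labelA</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.LabeledQ.accessible-node-labels.labelA.capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.LabeledQ.default-node-label-expression</name>
  <value>labelA</value>
</property>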

From: Sunil Govind [mailto:sunil.gov...@gmail.com]
Sent: Tuesday, October 11, 2016 11:44 AM
To: Frank Luo ; user@hadoop.apache.org
Subject: Re: how to add a shareable node label?

Hi Frank

Extremely sorry for the delay..

Yes, you are correct. The sharing feature of node labels is not needed in your case.
Existing node labels and a queue model could solve the problem.

Thanks
Sunil

On Fri, Oct 7, 2016 at 11:59 PM Frank Luo wrote:
That is correct, Sunil.

Just to confirm,  the Node Labeling feature on 2.8 or 3.0 alpha won’t satisfy 
my need, right?

From: Sunil Govind 
[mailto:sunil.gov...@gmail.com]
Sent: Friday, October 07, 2016 12:09 PM

To: Frank Luo; user@hadoop.apache.org
Subject: Re: how to add a shareable node label?

HI Frank

In that case, preemption may not be needed. Jobs over-utilizing queueB's resources will 
keep running until they complete. Since queueA is under-served, any next free container 
could go to queueA, which is for Job_A.

Thanks
Sunil

On Fri, Oct 7, 2016 at 9:58 PM Frank Luo wrote:
Sunil,

Your description pretty much matches my understanding, except for “Job_A will have to 
run as per its schedule w/o any delay”. My situation is that Job_A can be delayed. As 
long as it runs in queueA, I am happy.

Just as you said, processes normally running in queueB might not be preemptable. So if 
they overflow to queueA and then get preempted, that is not good.

From: Sunil Govind 
[mailto:sunil.gov...@gmail.com]
Sent: Friday, October 07, 2016 10:50 AM

To: Frank Luo; user@hadoop.apache.org
Subject: Re: how to add a shareable node label?

HI Frank

Thanks for the details.

I am not quite sure if I understood your problem correctly. I think you are looking for 
a solution to ensure that Job_A runs as per its schedule w/o any delay. Meanwhile, you 
also do not want to waste resources on those high-end machines where Job_A runs.

I think you still need node label exclusivity here since there is a h/w dependency. But 
if you have two queues which share "labelA", then "Job_A" can always be planned to run 
in one of them, say "queueA", and other jobs could run in "queueB". So if you tune 
capacities and preemption is enabled per queue level, over-utilized resources used by 
"queueB" could be preempted for "Job_A".

But if your shareable jobs are, say, Linux jobs which should not be preempted, then this 
may be only a half solution.

Thanks
Sunil

On Fri, Oct 7, 2016 at 7:36 AM Frank Luo wrote:
Sunil,

You confirmed my understanding. I got that understanding by reading the docs and 
haven’t really tried 2.8 or 3.0-alpha1.

My situation is that I am in a multi-tenant env, and I have several very powerful 
machines with expensive licenses to run a particular Linux job, let’s say Job_A. But 
the job is executed infrequently, so I want to let other jobs use the machines when 
Job_A is not running. In the meantime, I am not powerful enough to force all other jobs 
to be preemptable. As a matter of fact, I know they have Hadoop jobs inserting into 
sql-server, or just pure Linux jobs that are not preemptable in nature. So preempting 
jobs is not an option for me.

I hope it makes sense.

Frank

From: Sunil Govind 
[mailto:sunil.gov...@gmail.com]
Sent: Thursday, October 06, 2016 2:15 PM

To: Frank Luo; user@hadoop.apache.org
Subject: Re: how to add a shareable node label?

HI Frank

Ideally those containers will be preempted if there is unsatisfied demand for the 
configured label.

Let me explain with an example:
"labelA" has a few empty resources. All nodes under the "default" label are used. 
Hence a new application which is submitted to the "default" label has to wait. But 
if "labelA" is non-exclusive and there are some free resources, this new 
application can run on "labelA".

Re: how to add a shareable node label?

2016-10-11 Thread Sunil Govind
Hi Frank

Extremely sorry for the delay..

Yes, you are correct. The sharing feature of node labels is not needed in your
case. Existing node labels and a queue model could solve the problem.

Thanks
Sunil

On Fri, Oct 7, 2016 at 11:59 PM Frank Luo  wrote:

> That is correct, Sunil.
>
>
>
> Just to confirm,  the Node Labeling feature on 2.8 or 3.0 alpha won’t
> satisfy my need, right?
>
>
>
> *From:* Sunil Govind [mailto:sunil.gov...@gmail.com]
> *Sent:* Friday, October 07, 2016 12:09 PM
>
>
> *To:* Frank Luo ; user@hadoop.apache.org
> *Subject:* Re: how to add a shareable node label?
>
>
>
> HI Frank
>
>
>
> In that case, preemption may not be needed. So over-utilizing resources of
> queueB will be running till it completes. Since queueA is under served,
> then any next free container could go to queueA which is for Job_A.
>
>
>
> Thanks
>
> Sunil
>
>
>
> On Fri, Oct 7, 2016 at 9:58 PM Frank Luo  wrote:
>
> Sunil,
>
>
>
> Your description pretty much matches my understanding. Except for “Job_A
> will have to run as per its schedule w/o any delay”. My situation is that
> Job_A can be delayed. As long as it runs in queueA, I am happy.
>
>
>
> Just as you said, processes normally running in queueB might not be
> preemptable. So if they overflow to queueA then got preempted, then that is
> not good.
>
>
>
> *From:* Sunil Govind [mailto:sunil.gov...@gmail.com]
> *Sent:* Friday, October 07, 2016 10:50 AM
>
>
> *To:* Frank Luo ; user@hadoop.apache.org
>
> *Subject:* Re: how to add a shareable node label?
>
>
>
> HI Frank
>
>
>
> Thanks for the details.
>
>
>
> I am not quite sure if I understood you problem correctly. I think you are
> looking for a solution to ensure that Job_A will have to run as per its
> schedule w/o any delay. Meantime you also do not want to waste resources on
> those high end machine where Job_A is running.
>
>
>
> I think you still need node label exclusivity here since there is h/w
> dependency. But if you have 2 queues' which are shared to use "labelA"
> here, then always "Job_A" can be planned to run in that queue, say
> "queueA". Other jobs could be run in "queueB" here. So if you tune
> capacities and if preemption is enabled per queue level, overutilized
> resources used by "queueB" could be preempted for "Job_A".
>
>
>
> But if your sharable jobs are like some linux jobs which should not be
> preempted, then this may be only a half solution.
>
>
>
> Thanks
>
> Sunil
>
>
>
> On Fri, Oct 7, 2016 at 7:36 AM Frank Luo  wrote:
>
> Sunil,
>
>
>
> You confirmed my understanding. I got the understanding by reading the
> docs and haven’t really tried 2.8 or 3.0-alphal.
>
>
>
> My situation is that I am in a multi-tenant env, and  got several very
> powerful machines with expensive licenses to run a particular linux job,
> let’s say Job_A. But the job is executed infrequently, so I want to let
> other jobs to use the machines when Job_A is not running. In the meaning
> time, I am not powerful enough to force all other jobs to be preemptable.
> As matter of fact, I know they have Hadoop jobs inserting into sql-server,
> or just pure linux jobs that are not preemptable in nature. So preempt jobs
> is not an option for me.
>
>
>
> I hope it makes sense.
>
>
>
> Frank
>
>
>
> *From:* Sunil Govind [mailto:sunil.gov...@gmail.com]
> *Sent:* Thursday, October 06, 2016 2:15 PM
>
>
> *To:* Frank Luo ; user@hadoop.apache.org
> *Subject:* Re: how to add a shareable node label?
>
>
>
> HI Frank
>
>
>
> Ideally those containers will be preempted if there are unsatisfied demand
> for "configured label".
>
>
>
> I could explain this:
>
> "labelA" has few empty resources.  All nodes under "default" label is
> used. Hence a new application which is submitted to "default" label has to
> wait. But if "labelA" is non-exclusive and there are some free resources,
> this new application can run on "labelA".
>
> However if there are some more new apps submitted to "labelA", and if
> there are no more resources available in "labelA", then it may preempt
> containers from the app which was sharing containers earlier.
>
>
>
> May be you could share some more information so tht it may become more
> clear. Also I suppose you are running this in hadoop 3 alpha1 release.
> please correct me if I m wrong.
>
>
>
> Thanks
>
> Sunil
>
>
>
> On Thu, Oct 6, 2016 at 9:44 PM Frank Luo  wrote:
>
> Thanks Sunil.
>
>
>
> Ø  3. If there is any future ask for those resources , we will preempt
> the non labeled apps and give them back to labeled apps.
>
>
>
> Unfortunately, I am still not able to use it, because of the preemptive
> behavior. The jobs that steals labelled resources are not preemptable, and
> I’d rather waiting instead of killing.
>
>
>
> *From:* Sunil Govind [mailto:sunil.gov...@gmail.com]
> *Sent:* Thursday, October 06, 2016 1:59 AM
>
>
> *To:* Frank Luo ; 

RE: Authentication Failure talking to Ranger KMS

2016-10-11 Thread Benjamin Ross
That seems promising.  But shouldn't I be able to work around it by just 
ensuring that httpfs has all necessary privileges in the KMS service under 
Ranger?

Thanks,
Ben



From: Wei-Chiu Chuang [weic...@cloudera.com]
Sent: Tuesday, October 11, 2016 11:57 AM
To: Benjamin Ross
Cc: user@hadoop.apache.org; u...@ranger.incubator.apache.org
Subject: Re: Authentication Failure talking to Ranger KMS

Seems to me you encountered this bug?
HDFS-10481
If you’re using CDH, this is fixed in CDH5.5.5, CDH5.7.2 and CDH5.8.2

Wei-Chiu Chuang
A very happy Clouderan

On Oct 11, 2016, at 8:38 AM, Benjamin Ross wrote:

All,
I'm trying to use httpfs to write to an encryption zone with security off.  I 
can read from an encryption zone, but I can't write to one.

Here's the applicable namenode logs.  httpfs and root both have all possible 
privileges in the KMS.  What am I missing?


2016-10-07 15:48:16,164 DEBUG ipc.Server 
(Server.java:authorizeConnection(2095)) - Successfully authorized userInfo {
  effectiveUser: "root"
  realUser: "httpfs"
}
protocol: "org.apache.hadoop.hdfs.protocol.ClientProtocol"

2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:processOneRpc(1902)) -  
got #2
2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:run(2179)) - IPC Server 
handler 9 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
10.41.1.64:47622 Call#2 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-07 15:48:16,165 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:root (auth:PROXY) via httpfs (auth:SIMPLE) 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2205)
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(NameNodeRpcServer.java:create(699)) - *DIR* NameNode.create: file 
/tmp/cryptotest/hairyballs for DFSClient_NONMAPREDUCE_-1005188439_28 at 
10.41.1.64
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(FSNamesystem.java:startFileInt(2411)) - DIR* NameSystem.startFile: 
src=/tmp/cryptotest/hairyballs, holder=DFSClient_NONMAPREDUCE_-1005188439_28, 
clientMachine=10.41.1.64, createParent=true, replication=3, createFlag=[CREATE
, OVERWRITE], blockSize=134217728, 
supportedVersions=[CryptoProtocolVersion{description='Encryption zones', 
version=2, unknownValue=null}]
2016-10-07 15:48:16,167 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:hdfs (auth:SIMPLE) 
from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:484)
2016-10-07 15:48:16,171 DEBUG client.KerberosAuthenticator 
(KerberosAuthenticator.java:authenticate(205)) - Using fallback authenticator 
sequence.
2016-10-07 15:48:16,176 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:hdfs 
(auth:SIMPLE) 
cause:org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, messag
e: Forbidden
2016-10-07 15:48:16,176 DEBUG ipc.Server (ProtobufRpcEngine.java:call(631)) - 
Served: create queueTime= 2 procesingTime= 10 exception= IOException
2016-10-07 15:48:16,177 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:root 
(auth:PROXY) via httpfs (auth:SIMPLE) cause:java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: org.apach
e.hadoop.security.authentication.client.AuthenticationException: Authentication 
failed, status: 403, message: Forbidden
2016-10-07 15:48:16,177 INFO  ipc.Server (Server.java:logException(2299)) - IPC 
Server handler 9 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 10.41.1.64:47622 
Call#2 Retry#0
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, message: Forbidden
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:750)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2352)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2478)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2377)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:716)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
at 

Re: Authentication Failure talking to Ranger KMS

2016-10-11 Thread Wei-Chiu Chuang
Seems to me you encountered this bug? HDFS-10481

If you’re using CDH, this is fixed in CDH5.5.5, CDH5.7.2 and CDH5.8.2

Wei-Chiu Chuang
A very happy Clouderan

> On Oct 11, 2016, at 8:38 AM, Benjamin Ross  wrote:
> 
> All,
> I'm trying to use httpfs to write to an encryption zone with security off.  I 
> can read from an encryption zone, but I can't write to one.
> 
> Here's the applicable namenode logs.  httpfs and root both have all possible 
> privileges in the KMS.  What am I missing?
> 
> 
> 2016-10-07 15:48:16,164 DEBUG ipc.Server 
> (Server.java:authorizeConnection(2095)) - Successfully authorized userInfo {
>   effectiveUser: "root"
>   realUser: "httpfs"
> }
> protocol: "org.apache.hadoop.hdfs.protocol.ClientProtocol"
> 
> 2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:processOneRpc(1902)) -  
> got #2
> 2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:run(2179)) - IPC Server 
> handler 9 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.41.1.64:47622 Call#2 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
> 2016-10-07 15:48:16,165 DEBUG security.UserGroupInformation 
> (UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
> as:root (auth:PROXY) via httpfs (auth:SIMPLE) 
> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2205)
> 2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
> (NameNodeRpcServer.java:create(699)) - *DIR* NameNode.create: file 
> /tmp/cryptotest/hairyballs for DFSClient_NONMAPREDUCE_-1005188439_28 at 
> 10.41.1.64
> 2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
> (FSNamesystem.java:startFileInt(2411)) - DIR* NameSystem.startFile: 
> src=/tmp/cryptotest/hairyballs, holder=DFSClient_NONMAPREDUCE_-1005188439_28, 
> clientMachine=10.41.1.64, createParent=true, replication=3, createFlag=[CREATE
> , OVERWRITE], blockSize=134217728, 
> supportedVersions=[CryptoProtocolVersion{description='Encryption zones', 
> version=2, unknownValue=null}]
> 2016-10-07 15:48:16,167 DEBUG security.UserGroupInformation 
> (UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
> as:hdfs (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:484)
> 2016-10-07 15:48:16,171 DEBUG client.KerberosAuthenticator 
> (KerberosAuthenticator.java:authenticate(205)) - Using fallback authenticator 
> sequence.
> 2016-10-07 15:48:16,176 DEBUG security.UserGroupInformation 
> (UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:hdfs 
> (auth:SIMPLE) 
> cause:org.apache.hadoop.security.authentication.client.AuthenticationException:
>  Authentication failed, status: 403, messag
> e: Forbidden
> 2016-10-07 15:48:16,176 DEBUG ipc.Server (ProtobufRpcEngine.java:call(631)) - 
> Served: create queueTime= 2 procesingTime= 10 exception= IOException
> 2016-10-07 15:48:16,177 DEBUG security.UserGroupInformation 
> (UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:root 
> (auth:PROXY) via httpfs (auth:SIMPLE) cause:java.io.IOException: 
> java.util.concurrent.ExecutionException: java.io.IOException: org.apach
> e.hadoop.security.authentication.client.AuthenticationException: 
> Authentication failed, status: 403, message: Forbidden
> 2016-10-07 15:48:16,177 INFO  ipc.Server (Server.java:logException(2299)) - 
> IPC Server handler 9 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 10.41.1.64:47622 
> Call#2 Retry#0
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Authentication failed, status: 403, message: Forbidden
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:750)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2352)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2478)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2377)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:716)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at 

Authentication Failure talking to Ranger KMS

2016-10-11 Thread Benjamin Ross
All,
I'm trying to use httpfs to write to an encryption zone with security off.  I 
can read from an encryption zone, but I can't write to one.

Here's the applicable namenode logs.  httpfs and root both have all possible 
privileges in the KMS.  What am I missing?


2016-10-07 15:48:16,164 DEBUG ipc.Server 
(Server.java:authorizeConnection(2095)) - Successfully authorized userInfo {
  effectiveUser: "root"
  realUser: "httpfs"
}
protocol: "org.apache.hadoop.hdfs.protocol.ClientProtocol"

2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:processOneRpc(1902)) -  
got #2
2016-10-07 15:48:16,164 DEBUG ipc.Server (Server.java:run(2179)) - IPC Server 
handler 9 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
10.41.1.64:47622 Call#2 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-07 15:48:16,165 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:root (auth:PROXY) via httpfs (auth:SIMPLE) 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2205)
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(NameNodeRpcServer.java:create(699)) - *DIR* NameNode.create: file 
/tmp/cryptotest/hairyballs for DFSClient_NONMAPREDUCE_-1005188439_28 at 
10.41.1.64
2016-10-07 15:48:16,166 DEBUG hdfs.StateChange 
(FSNamesystem.java:startFileInt(2411)) - DIR* NameSystem.startFile: 
src=/tmp/cryptotest/hairyballs, holder=DFSClient_NONMAPREDUCE_-1005188439_28, 
clientMachine=10.41.1.64, createParent=true, replication=3, createFlag=[CREATE
, OVERWRITE], blockSize=134217728, 
supportedVersions=[CryptoProtocolVersion{description='Encryption zones', 
version=2, unknownValue=null}]
2016-10-07 15:48:16,167 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1751)) - PrivilegedAction 
as:hdfs (auth:SIMPLE) 
from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:484)
2016-10-07 15:48:16,171 DEBUG client.KerberosAuthenticator 
(KerberosAuthenticator.java:authenticate(205)) - Using fallback authenticator 
sequence.
2016-10-07 15:48:16,176 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:hdfs 
(auth:SIMPLE) 
cause:org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, messag
e: Forbidden
2016-10-07 15:48:16,176 DEBUG ipc.Server (ProtobufRpcEngine.java:call(631)) - 
Served: create queueTime= 2 procesingTime= 10 exception= IOException
2016-10-07 15:48:16,177 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:doAs(1728)) - PrivilegedActionException as:root 
(auth:PROXY) via httpfs (auth:SIMPLE) cause:java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: org.apach
e.hadoop.security.authentication.client.AuthenticationException: Authentication 
failed, status: 403, message: Forbidden
2016-10-07 15:48:16,177 INFO  ipc.Server (Server.java:logException(2299)) - IPC 
Server handler 9 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 10.41.1.64:47622 
Call#2 Retry#0
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, message: Forbidden
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:750)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2352)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2478)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2377)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:716)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2211)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2207)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2205)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: