Re: CredentialProvider API

2019-04-24 Thread Larry McCay
This is likely an issue only for cases where we need the password from
HDFS in order to access HDFS.
This should definitely be avoided by not having a static credential
provider path configured for startup that includes such a dependency.

For instance, the JIRA you cite is an example where we need to do group
lookup in order to determine whether you are allowed to access the HDFS
resource that provides the password required to do group lookup.

Storing passwords in credential stores within HDFS should be perfectly safe
for things like SSL that don't have a dependency on HDFS itself.

Those details are in the documentation page that you referenced, but if they
need to be made clearer, that completely makes sense.
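
As a minimal sketch of the safe pattern (the alias and keystore path below
are made up for the example, not a recommendation of specific values): keep
startup-critical secrets such as SSL passwords in a local keystore, so that
resolving them never requires HDFS.

import org.apache.hadoop.conf.Configuration;

public class LocalProviderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A local provider breaks the bootstrap cycle: reading the keystore
    // needs no filesystem round-trip through HDFS.
    conf.set("hadoop.security.credential.provider.path",
        "localjceks://file/etc/hadoop/conf/ssl.jceks");
    // getPassword() consults the configured providers first, then falls
    // back to a clear-text config value if no provider has the alias.
    char[] pass = conf.getPassword("ssl.server.keystore.password");
    System.out.println(pass != null ? "resolved" : "not found");
  }
}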

On Wed, Apr 24, 2019 at 9:56 PM Karthik P  wrote:

> Team,
>
> The datanode fails to restart after the credential provider is configured to
> store credentials in HDFS (jceks://hdfs@hostname:9001/credential/keys.jceks).
>
> We get a StackOverflowError in the datanode jsvc.out file, similar to
> HADOOP-11934.
>
> As per the documentation link
> <https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Supported_Features>,
> we support storing credentials in HDFS.
>
> *URI jceks://file|hdfs/path-to-keystore, is used to retrieve credentials
> from a Java keystore. The underlying use of the Hadoop filesystem
> abstraction allows credentials to be stored on the local filesystem or
> within HDFS.*
>
> Assume a scenario where all of our datanodes are down and
> hadoop.security.credential.provider.path is configured to an HDFS location.
> When we call FileSystem.get() during datanode restart, we end up in a
> recursive call if HDFS is inaccessible.
>
>
> /**
>  * Check and set 'configuration' if necessary.
>  *
>  * @param theObject object for which to set configuration
>  * @param conf Configuration
>  */
> public static void setConf(Object theObject, Configuration conf) {
>   if (conf != null) {
> if (theObject instanceof Configurable) {
>   ((Configurable) theObject).setConf(conf);
> }
> setJobConf(theObject, conf);
>   }
> }
>
>
> There are no issues if we store credentials on the local filesystem
> (localjceks://file); the problem occurs only with jceks://hdfs/.
>
> Can I change the Hadoop docs to state that we do not support storing
> credentials in HDFS? Or shall I handle this scenario only for the startup
> issue?
>
>
> Thanks,
> Karthik
>


CredentialProvider API

2019-04-24 Thread Karthik P
Team,

The datanode fails to restart after the credential provider is configured to
store credentials in HDFS (jceks://hdfs@hostname:9001/credential/keys.jceks).

We get a StackOverflowError in the datanode jsvc.out file, similar to
HADOOP-11934.

As per the documentation link
<https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Supported_Features>,
we support storing credentials in HDFS.

*URI jceks://file|hdfs/path-to-keystore, is used to retrieve credentials
from a Java keystore. The underlying use of the Hadoop filesystem
abstraction allows credentials to be stored on the local filesystem or
within HDFS.*

Assume a scenario where all of our datanodes are down and
hadoop.security.credential.provider.path is configured to an HDFS location.
When we call FileSystem.get() during datanode restart, we end up in a
recursive call if HDFS is inaccessible.


/**
 * Check and set 'configuration' if necessary.
 *
 * @param theObject object for which to set configuration
 * @param conf Configuration
 */
public static void setConf(Object theObject, Configuration conf) {
  if (conf != null) {
if (theObject instanceof Configurable) {
  ((Configurable) theObject).setConf(conf);
}
setJobConf(theObject, conf);
  }
}


There are no issues if we store credentials on the local filesystem
(localjceks://file); the problem occurs only with jceks://hdfs/.
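
To make the cycle concrete, here is a hedged sketch of the failing setup
(simplified; the real recursion runs through Configuration, the credential
provider factory, and FileSystem internals):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsProviderCycleSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The provider itself lives in HDFS...
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@hostname:9001/credential/keys.jceks");
    // ...so building the HDFS client may ask the provider for a password,
    // which needs the HDFS client: FileSystem.get() -> provider lookup ->
    // FileSystem.get() -> ... -> StackOverflowError.
    FileSystem fs = FileSystem.get(conf);
  }
}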

Can I change the Hadoop docs to state that we do not support storing
credentials in HDFS? Or shall I handle this scenario only for the startup
issue?


Thanks,
Karthik


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-04-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/

[Apr 23, 2019 6:21:13 AM] (wwei) YARN-9325. 
TestQueueManagementDynamicEditPolicy fails intermittent.
[Apr 23, 2019 7:45:42 AM] (ztang) SUBMARINE-40. Add TonY runtime to Submarine. 
Contributed by Keqiu Hu.
[Apr 23, 2019 9:33:58 AM] (ztang) YARN-9475. [YARN-9473] Create basic VE 
plugin. Contributed by Peter
[Apr 23, 2019 12:05:39 PM] (nanda) HDDS-1368. Cleanup old ReplicationManager 
code from SCM.
[Apr 23, 2019 3:34:14 PM] (nanda) HDDS-1411. Add unit test to check if SCM 
correctly sends close commands
[Apr 23, 2019 7:40:44 PM] (inigoiri) YARN-9339. Apps pending metric incorrect 
after moving app to a new
[Apr 23, 2019 10:27:04 PM] (gifuma) YARN-9491.
[Apr 23, 2019 10:42:56 PM] (gifuma) YARN-9501. 
TestCapacitySchedulerOvercommit#testReducePreemptAndCancel
[Apr 24, 2019 12:50:25 AM] (tasanuma) YARN-9081. Update jackson from 1.9.13 to 
2.x in




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
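
For reference, a hedged sketch of the equals() shape FindBugs expects here
(illustrative only; the identifying field is an assumption, not the actual
WorkerId source):

public class WorkerIdSketch {
  private final String workerId;  // assumed identifying field

  public WorkerIdSketch(String workerId) {
    this.workerId = workerId;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;
    }
    // instanceof is false for null, which also covers the missing null check
    if (!(other instanceof WorkerIdSketch)) {
      return false;
    }
    WorkerIdSketch that = (WorkerIdSketch) other;
    return workerId.equals(that.workerId);
  }

  @Override
  public int hashCode() {
    return workerId.hashCode();  // keep hashCode consistent with equals
  }
}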

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.hdds.scm.net.TestNetworkTopologyImpl 
   hadoop.hdds.scm.net.TestNodeSchemaManager 
   hadoop.ozone.TestOzoneConfigurationFields 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-patch-pylint.txt
  [84K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1116/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-tony-runtime.txt
  [12K]
   

[jira] [Resolved] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16252.
-
   Resolution: Fixed
 Assignee: Ben Roling
Fix Version/s: 3.3.0

+1, committed to trunk. Thanks!

> Use configurable dynamo table name prefix in S3Guard tests
> --
>
> Key: HADOOP-16252
> URL: https://issues.apache.org/jira/browse/HADOOP-16252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
>
> Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes 
> it awkward to set up a least-privilege type AWS IAM user or role that can 
> successfully execute the full test suite.  You either have to know all the 
> specific hardcoded table names and give the user Dynamo read/write access to 
> those by name or just give blanket read/write access to all Dynamo tables in 
> the account.
> I propose the tests use a configuration property to specify a prefix for the 
> table names used.  Then the full test suite can be run by a user that is 
> given read/write access to all tables with names starting with the configured 
> prefix.
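
A hedged sketch of the proposed shape (the test property name below is an
assumption for illustration, not the final one):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Read a configurable prefix, then derive each test table name from it, so
// an IAM policy can grant DynamoDB access to table/<prefix>* only.
String prefix = conf.getTrimmed(
    "fs.s3a.s3guard.test.dynamo.table.prefix", "s3guard-test-");
conf.set("fs.s3a.s3guard.ddb.table", prefix + "metadata");
{code}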



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16274) transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable

2019-04-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16274:
---

 Summary: transient failure of 
ITestS3GuardToolDynamoDB.testDestroyUnknownTable
 Key: HADOOP-16274
 URL: https://issues.apache.org/jira/browse/HADOOP-16274
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Experienced a transient failure of a test
{code}
[ERROR] 
testDestroyUnknownTable(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 143.671 s  <<< ERROR!
java.lang.IllegalArgumentException: Table ireland-team is not deleted.
{code}

* The test run blocked for a while; I'd assumed network problems, but maybe it
was retrying
* Verified on the AWS console that the table was gone
* Not surfaced on reruns

I'm assuming this was transient, but anything that goes near creating tables
runs a risk of creating bills. We need to move to on-demand table creation as
soon as we upgrade the SDK.
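
For context, a hedged sketch of what on-demand creation looks like (requires
a recent AWS SDK for Java 1.11.x; the table and key names are illustrative,
not the S3Guard schema):

{code}
import com.amazonaws.services.dynamodbv2.model.*;

CreateTableRequest req = new CreateTableRequest()
    .withTableName("s3guard-test-table")
    .withAttributeDefinitions(
        new AttributeDefinition("key", ScalarAttributeType.S))
    .withKeySchema(new KeySchemaElement("key", KeyType.HASH))
    // PAY_PER_REQUEST removes provisioned throughput, so an idle test
    // table stops accruing hourly capacity charges.
    .withBillingMode(BillingMode.PAY_PER_REQUEST);
{code}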



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16273) Update mssql-jdbc to 7.2.2.jre8

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16273:


 Summary: Update mssql-jdbc to 7.2.2.jre8
 Key: HADOOP-16273
 URL: https://issues.apache.org/jira/browse/HADOOP-16273
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16272) Update HikariCP to 3.3.1

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16272:


 Summary: Update HikariCP to 3.3.1
 Key: HADOOP-16272
 URL: https://issues.apache.org/jira/browse/HADOOP-16272
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16271) Update okhttp to 3.14.1

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16271:


 Summary: Update okhttp to 3.14.1
 Key: HADOOP-16271
 URL: https://issues.apache.org/jira/browse/HADOOP-16271
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-04-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestDiskChecker 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/301/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt