Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-01-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/

[Jan 9, 2022 8:41:10 AM] (noreply) HDFS-16404. Fix typo for CachingGetSpaceUsed (#3844). Contributed by tomscut.
[Jan 9, 2022 6:01:47 PM] (noreply) HADOOP-14334. S3 SSEC tests to downgrade when running against a mandatory encryption object store (#3870)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):

      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :

   hadoop.yarn.csi.client.TestCsiClient

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-compile-cc-root.txt [96K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-compile-javac-root.txt [348K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-checkstyle-root.txt [14M]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-shellcheck.txt [28K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/results-javadoc-javadoc-root.txt [408K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/746/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-18077) ProfileOutputServlet unable to proceed due to NPE

2022-01-10 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18077:
-

 Summary: ProfileOutputServlet unable to proceed due to NPE
 Key: HADOOP-18077
 URL: https://issues.apache.org/jira/browse/HADOOP-18077
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ProfileOutputServlet's context doesn't have the Hadoop configuration available,
and hence the async-profiler redirection to the output servlet fails when
checking whether admin access is allowed:
{code:java}
HTTP ERROR 500 java.lang.NullPointerException
URI:    /prof-output-hadoop/async-prof-pid-98613-cpu-2.html
STATUS:    500
MESSAGE:    java.lang.NullPointerException
SERVLET:    org.apache.hadoop.http.ProfileOutputServlet-58c34bb3
CAUSED BY:    java.lang.NullPointerException
Caused by:
java.lang.NullPointerException
    at org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1619)
    at org.apache.hadoop.http.ProfileOutputServlet.doGet(ProfileOutputServlet.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:516)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    at java.lang.Thread.run(Thread.java:748)
{code}
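
For illustration, a hedged sketch of the failure path and one possible guard; the context-attribute key and the admin-gating property mirror HttpServer2's conventions but should be treated as assumptions here, not the actual patch:

{code:java}
import javax.servlet.ServletContext;
import org.apache.hadoop.conf.Configuration;

// Sketch only: HttpServer2 stores its Configuration as a servlet-context
// attribute; the profiler output context never gets that attribute set, so
// the admin-access check dereferences null. A null-safe check avoids the 500.
class InstrumentationAccessSketch {
  // assumed attribute key, mirroring HttpServer2.CONF_CONTEXT_ATTRIBUTE
  static final String CONF_CONTEXT_ATTRIBUTE = "hadoop.conf";

  static boolean isInstrumentationAccessAllowed(ServletContext ctx) {
    Configuration conf = (Configuration) ctx.getAttribute(CONF_CONTEXT_ATTRIBUTE);
    if (conf == null) {
      // the NPE happens here in the unpatched code; failing closed (or wiring
      // the Configuration into the context at startup) avoids the HTTP 500
      return false;
    }
    // admin gating only applies when the cluster explicitly requires it
    return !conf.getBoolean("hadoop.security.instrumentation.requires.admin", false);
  }
}
{code}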



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Resolved] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2022-01-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16410.

Resolution: Duplicate

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --
>
> Key: HADOOP-16410
> URL: https://issues.apache.org/jira/browse/HADOOP-16410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Jose Luis Pedrosa
>Priority: Minor
> Fix For: 3.2.2
>
>
> The OpenJDK 8 image is based on Alpine 3.9, which means the shipped version of
> libssl is 1.1.1b-r1:
>
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed]
> {noformat}
> The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, creating a Docker image with an updated version of WildFly OpenSSL
> (1.0.7.Final) solves the issue.
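
As a purely hypothetical fail-fast illustration (not part of Hadoop, and not the reporter's fix), a startup check could warn before native OpenSSL is enabled when an affected wildfly-openssl is on the classpath; whether the jar exposes an Implementation-Version depends on how it was built:

{code:java}
// Hypothetical guard, assuming wildfly-openssl is on the classpath and its
// manifest carries an Implementation-Version (it may not).
public class WildflyOpensslGuard {
  public static void main(String[] args) {
    Package p = org.wildfly.openssl.SSL.class.getPackage();
    String version = (p == null) ? null : p.getImplementationVersion();
    // lexicographic comparison is crude, but adequate within the 1.0.x line
    if (version == null || version.compareTo("1.0.7") < 0) {
      System.err.println("wildfly-openssl " + version
          + " may NPE against OpenSSL 1.1.1b (JBEAP-16425);"
          + " upgrade to 1.0.7.Final or fall back to JSSE");
    }
  }
}
{code}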






[jira] [Reopened] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2022-01-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16410:


Reopening this to close it as a duplicate.




[jira] [Created] (HADOOP-18076) WrappedStream and S3Object must be closed during S3AInputStream.close()

2022-01-10 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-18076:
--

 Summary: WrappedStream and S3Object must be closed during 
S3AInputStream.close()
 Key: HADOOP-18076
 URL: https://issues.apache.org/jira/browse/HADOOP-18076
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.3.1
Reporter: Mukund Thakur
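
The summary describes the needed behavior; here is a minimal sketch of that close-both pattern (class and field names are illustrative, not the actual S3AInputStream internals):

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

// Sketch only: close() must release both the wrapped content stream and the
// object handle that owns the underlying HTTP resources.
class WrappingInputStream extends InputStream {
  private InputStream wrappedStream; // stream over the object content
  private Closeable s3Object;        // handle owning the HTTP connection
  private volatile boolean closed;

  WrappingInputStream(InputStream wrapped, Closeable owner) {
    this.wrappedStream = wrapped;
    this.s3Object = owner;
  }

  @Override
  public int read() throws IOException {
    if (closed) {
      throw new IOException("stream is closed");
    }
    return wrappedStream.read();
  }

  @Override
  public synchronized void close() throws IOException {
    if (closed) {
      return; // idempotent close
    }
    closed = true;
    try {
      wrappedStream.close(); // release the wrapped content stream first
    } finally {
      s3Object.close();      // then the owning object, even if the first close throws
      wrappedStream = null;
      s3Object = null;
    }
  }
}
{code}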









[jira] [Resolved] (HADOOP-17954) org.apache.spark.SparkException: Task failed while writing rows S3

2022-01-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17954.
-
Resolution: Cannot Reproduce

> org.apache.spark.SparkException: Task failed while writing rows S3
> --
>
> Key: HADOOP-17954
> URL: https://issues.apache.org/jira/browse/HADOOP-17954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: sudarshan
>Priority: Major
>
> I am trying to run a Spark job (1.6.0) which reads rows from HBase, does some
> transformation, and finally writes to S3.
> Sometimes I notice errors because of a timeout.
> The task is able to write to S3, but it fails at the last stage.
> The issue is intermittent, but I see this error most of the time. Here are the
> error details:
>  
> {code:java}
> Job aborted due to stage failure: Task 1074 in stage 1.0 failed 4 times, most recent failure: Lost task 1074.3 in stage 1.0 (TID 2162, abcd.ecom.bigdata.int.abcd.com, executor 18): org.apache.spark.SparkException: Task failed while writing rows
> Job aborted due to stage failure: Task 1074 in stage 1.0 failed 4 times, most recent failure: Lost task 1074.3 in stage 1.0 (TID 2162, abcd.ecom.bigdata.int.abcd.com, executor 18): org.apache.spark.SparkException: Task failed while writing rows
>     at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:417)
>     at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:148)
>     at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:148)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>     at org.apache.spark.scheduler.Task.run(Task.scala:89)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.fs.s3a.AWSS3IOException: saving output on common/hbaseHistory/metadataSept100621/_temporary/_attempt_202110060911_0001_m_001074_3/year=2021/month=09/submitDate=2021-09-08T04%3a58%3a47Z/part-r-01074-205c8b21-7840-4985-bb0e-65ed787c337d.snappy.parquet: com.cloudera.com.amazonaws.services.s3.model.AmazonS3Exception: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID: 5J85XRNF10W16ZJS), S3 Extended Request ID: 4g08KHEDbFs5jueJqt9Snw7Xlmw5VeS1eCtJyAzp0fzHGinMhBntwIEhddJP7LLaS0teR3EAuOI=: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID: 5J85XRNF10W16ZJS)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:143)
>     at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:123)
>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>     at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
>     at parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:470)
>     at parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:112)
>     at parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
>     at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetRelation.scala:101)
>     at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply$mcV$sp(WriterContainer.scala:387)
>     at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply(WriterContainer.scala:343)
>     at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer$$anonfun$writeRows$4.apply(WriterContainer.scala:343)
>     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1278)
>     at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:409)
>     ... 8 more
> Suppressed: java.lang.NullPointerException
>     at parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:152)
>     at parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:111)
>     at parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)

[jira] [Created] (HADOOP-18075) ABFS: Fix failure caused by listFiles() in ITestAbfsRestOperationException

2022-01-10 Thread Sumangala Patki (Jira)
Sumangala Patki created HADOOP-18075:


 Summary: ABFS: Fix failure caused by listFiles() in 
ITestAbfsRestOperationException
 Key: HADOOP-18075
 URL: https://issues.apache.org/jira/browse/HADOOP-18075
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.2
Reporter: Sumangala Patki
Assignee: Sumangala Patki


testAbfsRestOperationExceptionFormat in ITestAbfsRestOperationException fails
because the FileNotFound exception it receives is in the wrong format. The test
invokes the FileSystem method listFiles(), and the exception thrown turns out to
be in the GetPathStatus format instead of the ListStatus format (the two differ
in the number of error fields in the response).

The FileSystem implementation of listFiles() calls listLocatedStatus(), which
then makes a ListStatus call. A recent check-in that added an implementation of
listLocatedStatus() to the ABFS driver issued a GetFileStatus request before the
ListStatus API was invoked, leading to the aberrant FNF exception format. The
fix eliminates the GetPathStatus request made before ListStatus is called.
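
To make the change concrete, here is a hedged before/after sketch; the Store interface and method names are illustrative stand-ins, not the actual ABFS driver code:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

class ListingSketch {
  interface Store {
    boolean pathExists(String path) throws IOException; // GetPathStatus-style probe
    List<String> list(String path) throws IOException;  // ListStatus call
  }

  private final Store store;

  ListingSketch(Store store) {
    this.store = store;
  }

  // Before the fix (per the description): the extra existence probe meant a
  // missing path surfaced in the GetPathStatus error format.
  Iterator<String> listLocatedStatusOld(String path) throws IOException {
    if (!store.pathExists(path)) {             // extra GetFileStatus request
      throw new FileNotFoundException(path);   // wrong (GetPathStatus) format
    }
    return store.list(path).iterator();
  }

  // After the fix: call ListStatus directly; a missing path now fails inside
  // list() and surfaces in the ListStatus error format.
  Iterator<String> listLocatedStatusNew(String path) throws IOException {
    return store.list(path).iterator();
  }
}
{code}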






[jira] [Created] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-01-10 Thread Philippe Lanoe (Jira)
Philippe Lanoe created HADOOP-18074:
---

 Summary: Partial/Incomplete groups list can be returned in LDAP 
groups lookup
 Key: HADOOP-18074
 URL: https://issues.apache.org/jira/browse/HADOOP-18074
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Philippe Lanoe


Hello,

The
{code:java}
Set<String> doGetGroups(String user, int goUpHierarchy) {code}
method in

[https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]

appears to have an issue if a *NamingException* is caught in the middle of the
loop:

The groups variable is not reset in the catch clause, and therefore the fallback
lookup cannot be executed (when goUpHierarchy == 0 at least):
{code:java}
if (groups.isEmpty() || goUpHierarchy > 0) {
  groups = lookupGroup(result, c, goUpHierarchy);
}
{code}
 

The consequence is that only a partial list of groups is returned, which is not
correct.

The following options could be used as a solution (a sketch of the first option
appears below):
 * Reset the groups variable to an empty list in the catch clause, to trigger the
fallback query.
 * Add an option flag to enable ignoring groups that raise a NamingException
(since they are most probably not groups).

Independently, if an issue were to occur in the fallback query as well as in the
first lookup (so that the full list cannot be returned), the method should/could
(with an option flag) throw an exception, because in some scenarios accuracy is
important.
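
As a simplified sketch of the first option, assuming a reduced lookup loop rather than the actual LdapGroupsMapping logic:

{code:java}
import java.util.HashSet;
import java.util.Set;
import javax.naming.NamingException;

// Simplified sketch, not the actual LdapGroupsMapping code: reset the partial
// result when a NamingException interrupts the loop, so the fallback runs.
class GroupLookupSketch {
  Set<String> doGetGroups(Iterable<String> memberships) {
    Set<String> groups = new HashSet<>();
    try {
      for (String m : memberships) {
        groups.add(resolveGroupName(m)); // may throw NamingException mid-loop
      }
    } catch (NamingException e) {
      groups.clear(); // proposed fix: drop the partial list...
    }
    if (groups.isEmpty()) {
      groups = fallbackLookup(memberships); // ...so the fallback actually runs
    }
    return groups;
  }

  String resolveGroupName(String membership) throws NamingException {
    return membership; // placeholder for the LDAP attribute lookup
  }

  Set<String> fallbackLookup(Iterable<String> memberships) {
    return new HashSet<>(); // placeholder for the goUpHierarchy/fallback query
  }
}
{code}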






[jira] [Resolved] (HADOOP-18066) AbstractJavaKeyStoreProvider: need a way to read credential store password from Configuration

2022-01-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HADOOP-18066.
---
Resolution: Invalid

> AbstractJavaKeyStoreProvider: need a way to read credential store password 
> from Configuration
> -
>
> Key: HADOOP-18066
> URL: https://issues.apache.org/jira/browse/HADOOP-18066
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: security
>Reporter: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Codepath in focus is 
> [this|https://github.com/apache/hadoop/blob/c3006be516ce7d4f970e24e7407b401318ceec3c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java#L316]
> {code}
>   password = ProviderUtils.locatePassword(CREDENTIAL_PASSWORD_ENV_VAR,
>   conf.get(CREDENTIAL_PASSWORD_FILE_KEY));
> {code}
> Since HIVE-14822, we can use a custom keystore that HiveServer2 propagates to
> jobs/tasks of different execution engines (MR, Tez, Spark).
> We're able to pass any "jceks:" URL, but not a password, e.g. on this codepath:
> {code}
> Caused by: java.security.UnrecoverableKeyException: Password verification failed
>   at com.sun.crypto.provider.JceKeyStore.engineLoad(JceKeyStore.java:879) ~[sunjce_provider.jar:1.8.0_232]
>   at java.security.KeyStore.load(KeyStore.java:1445) ~[?:1.8.0_232]
>   at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.locateKeystore(AbstractJavaKeyStoreProvider.java:326) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:86) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.KeyStoreProvider.<init>(KeyStoreProvider.java:49) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:42) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:35) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2409) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2347) ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getPasswordString(AbfsConfiguration.java:295) ~[hadoop-azure-3.1.1.7.1.7.0-551.jar:?]
>   at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:525) ~[hadoop-azure-3.1.1.7.1.7.0-551.jar:?]
> {code}
> Even though there is the option of reading the password from a text file, it's
> not secure; we need to try reading a Configuration property first and, if it's
> null, fall back to the environment variable.
> Hacking System.getenv() is only possible with reflection, which doesn't look
> good.
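
A hedged sketch of the wished-for lookup order; the Configuration key below is hypothetical (no such property exists in Hadoop), while the environment variable is the existing one:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: try a Configuration property first, then fall back to the
// existing environment variable.
class KeystorePasswordSketch {
  private static final String CONF_KEY = "hadoop.security.credstore.password"; // hypothetical key
  private static final String ENV_VAR = "HADOOP_CREDSTORE_PASSWORD";           // existing env var

  static char[] locatePassword(Configuration conf) {
    String fromConf = conf.get(CONF_KEY);
    if (fromConf != null) {
      return fromConf.toCharArray();
    }
    String fromEnv = System.getenv(ENV_VAR);
    return (fromEnv == null) ? null : fromEnv.toCharArray();
  }
}
{code}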






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-01-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
   hadoop.fs.TestFileUtil
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
   hadoop.hdfs.server.federation.router.TestRouterQuota
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
   hadoop.yarn.server.resourcemanager.TestClientRMService
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
   hadoop.mapreduce.lib.input.TestLineRecordReader
   hadoop.mapred.TestLineRecordReader
   hadoop.tools.TestDistCpSystem
   hadoop.yarn.sls.TestSLSRunner
   hadoop.resourceestimator.solver.impl.TestLpSolver
   hadoop.resourceestimator.service.TestResourceEstimatorService

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-compile-javac-root.txt [476K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-mvnsite-root.txt [556K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-javadoc-root.txt [40K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [224K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [424K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [120K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/538/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [28K]