[jira] [Created] (HADOOP-17371) Bump Jetty to the latest version 9.4.34

2020-11-09 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-17371:


 Summary: Bump Jetty to the latest version 9.4.34
 Key: HADOOP-17371
 URL: https://issues.apache.org/jira/browse/HADOOP-17371
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
9.4.34






[jira] [Created] (HADOOP-17370) Upgrade commons-compress to 1.20

2020-11-09 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created HADOOP-17370:
--

 Summary: Upgrade commons-compress to 1.20
 Key: HADOOP-17370
 URL: https://issues.apache.org/jira/browse/HADOOP-17370
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.2.1, 3.3.0
Reporter: Dongjoon Hyun









[jira] [Created] (HADOOP-17369) Bump up snappy-java to 1.1.8.1

2020-11-09 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17369:
-

 Summary: Bump up snappy-java to 1.1.8.1
 Key: HADOOP-17369
 URL: https://issues.apache.org/jira/browse/HADOOP-17369
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


The native libraries provided by snappy-java do not work on some distros on 
aarch64 and ppc64le. Upgrading snappy-java should fix this.






[jira] [Created] (HADOOP-17368) Zookeeper secret manager attempts to reuse token sequence numbers

2020-11-09 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17368:
--

 Summary: Zookeeper secret manager attempts to reuse token sequence 
numbers
 Key: HADOOP-17368
 URL: https://issues.apache.org/jira/browse/HADOOP-17368
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


[~daryn] reported that the ZK delegation token secret manager uses a 
{{SharedCounter}} to synchronize increments of a monotonically increasing 
sequence number for new tokens. Yet the KMS logs occasionally, depending on 
load, contain an odd error indicating collisions: 


{code:bash}
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = 
NodeExists for /zkdtsm/ZKDTSMRoot/ZKDTSMTokensRoot/DT_137547444
{code}


ZKDTSM does a CAS get-and-set of the sequence number. Rather than returning 
the value it set, it returns the current value, which may have already been 
incremented by another KMS.
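
To illustrate the race, a rough sketch (not the actual ZKDTSM code; it assumes 
Curator's {{SharedCount}} API, and the method name is made up):

{code:java}
import org.apache.curator.framework.recipes.shared.SharedCount;
import org.apache.curator.framework.recipes.shared.VersionedValue;

// Sketch only: CAS-increment the shared counter, then note what gets
// returned afterwards.
int incrementSeqNum(SharedCount counter) throws Exception {
  while (true) {
    VersionedValue<Integer> current = counter.getVersionedValue();
    int next = current.getValue() + 1;
    if (counter.trySetCount(current, next)) {
      // Buggy: re-reading the counter can observe another KMS's later
      // increment, handing two callers the same sequence number.
      //   return counter.getCount();
      // Correct: return the value this CAS actually installed.
      return next;
    }
  }
}
{code}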






Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2020-11-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/53/

No changes




-1 overall


The following subsystems voted -1:
blanks findbugs mvnsite pathlen shadedclient unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 694] 

findbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 694] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 333] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 

Re: Intervention. Stabilizing Yetus (Attn. Azure)

2020-11-09 Thread Ayush Saxena
The failing Azure tests are being tracked at HADOOP-17325

https://issues.apache.org/jira/browse/HADOOP-17325

On Mon, 9 Nov 2020 at 23:02, Ahmed Hussein  wrote:

> I created new Jiras for HDFS failures. Please consider doing the same for
> Yarn and Azure.
> For convenience, the list of failures in the qbt report is as follows:
>
> Test Result
> <
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/
> >
> (50
> failures / -7)
>
>-
>
>  
> org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testGetCachedDatanodeReport
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpcMultiDestination/testGetCachedDatanodeReport/
> >
>-
>
>  
> org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testNamenodeMetrics
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpcMultiDestination/testNamenodeMetrics/
> >
>-
>
>  
> org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testErasureCoding
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.hdfs.server.federation.router/TestRouterRpcMultiDestination/testErasureCoding/
> >
>-
>
>  
> org.apache.hadoop.hdfs.server.datanode.TestBPOfferService.testMissBlocksWhenReregister
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestBPOfferService/testMissBlocksWhenReregister/
> >
>-
> org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning[Testing
>with: SYNTH,
>
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler,
>(nodeFile null)]
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.yarn.sls/TestReservationSystemInvariants/testSimulatorRunning_Testing_with__SYNTH__org_apache_hadoop_yarn_server_resourcemanager_scheduler_fair_FairScheduler___nodeFile_null__/
> >
>-
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.testAMSimulator[1]
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.yarn.sls.appmaster/TestAMSimulator/testAMSimulator_1_/
> >
>-
>
>  
> org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testTokenThreadTimeout
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.yarn.server.resourcemanager.security/TestDelegationTokenRenewer/testTokenThreadTimeout/
> >
>-
>
>  
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithOpportunisticContainers
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithOpportunisticContainers/
> >
>-
>
>  
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithEnforceExecutionType
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithEnforceExecutionType/
> >
>- org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.fs.azure/TestBlobMetadata/testFolderMetadata/
> >
>-
>
>  org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.fs.azure/TestBlobMetadata/testFirstContainerVersionMetadata/
> >
>- org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.fs.azure/TestBlobMetadata/testPermissionMetadata/
> >
>- org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.fs.azure/TestBlobMetadata/testOldPermissionMetadata/
> >
>-
>
>  
> org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible
><
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/319/testReport/junit/org.apache.hadoop.fs.azure/TestNativeAzureFileSystemConcurrency/testNoTempBlobsVisible/
> >
>-
>
>  org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs
><
> 

[jira] [Resolved] (HADOOP-17066) S3A staging committer committing duplicate files

2020-11-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17066.
-
Resolution: Duplicate

HADOOP-17318 covers the duplicate-job problem everywhere in the committer and, 
combined with a change in Spark, the problem should go away.

This is an intermittent issue, as it depends on the timing with which stages 
are launched and, for the task staging dir conflict, on whether two task 
attempts of conflicting jobs are launched at the same time.

> S3A staging committer committing duplicate files
> 
>
> Key: HADOOP-17066
> URL: https://issues.apache.org/jira/browse/HADOOP-17066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> SPARK-39111 reports concurrent jobs double-writing files.






[jira] [Created] (HADOOP-17367) Improve TLS/SSL default settings for security and performance

2020-11-09 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17367:
--

 Summary: Improve TLS/SSL default settings for security and 
performance
 Key: HADOOP-17367
 URL: https://issues.apache.org/jira/browse/HADOOP-17367
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


[~kihwal] reported that {{HttpServer2}} is still accepting TLS 1.1 and 1.0. 
These are only rejected when the Java security settings exclude them. The 
expensive algorithms are still being used.


{code:bash}
main, WRITE: TLSv1.2 Handshake, length = 239
main, READ: TLSv1.2 Handshake, length = 1508
*** ServerHello, TLSv1.2
...
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
{code}

SSLFactory calls {{sslEngine.setEnabledCipherSuites()}} to set the enabled 
ciphers. Apparently this does not disable ciphers outside the included set, so 
SSLFactory's cipher-disabling feature does not work. Or Jetty may be undoing it.

Jetty 9 introduced {{SslContextFactory}}. The following methods can be used:

{code:java}
setExcludeCipherSuites()
setExcludeProtocols()
setIncludeCipherSuites()
setIncludeProtocols()
{code}
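
As a rough sketch (assuming Jetty 9's {{SslContextFactory}}; this is not the 
eventual Hadoop patch), tightening the defaults could look like:

{code:java}
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Sketch only: reject old protocols at the connector and drop CBC-mode
// suites such as the TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 seen above.
SslContextFactory factory = new SslContextFactory();
factory.setIncludeProtocols("TLSv1.2");
factory.setExcludeProtocols("TLSv1", "TLSv1.1", "SSLv3");
// Jetty treats exclude entries as regexes, so one pattern covers all
// CBC-based cipher suites.
factory.setExcludeCipherSuites(".*_CBC_.*");
{code}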

SSLFactory is not used by HttpServer2. It is only used by 
{{DatanodeHttpServer}} and {{ShuffleHandler}}. The reloading feature is also 
broken for the same reason.






Re: Intervention. Stabilizing Yetus (Attn. Azure)

2020-11-09 Thread Ahmed Hussein
I created new Jiras for HDFS failures. Please consider doing the same for
Yarn and Azure.
For convenience, the list of failures in the qbt report is as follows:

Test Result (50 failures / -7)

   -
   
org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testGetCachedDatanodeReport
   

   -
   
org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testNamenodeMetrics
   

   -
   
org.apache.hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination.testErasureCoding
   

   -
   
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService.testMissBlocksWhenReregister
   

   - 
org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning[Testing
   with: SYNTH,
   org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler,
   (nodeFile null)]
   

   - org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.testAMSimulator[1]
   

   -
   
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testTokenThreadTimeout
   

   -
   
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithOpportunisticContainers
   

   -
   
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithEnforceExecutionType
   

   - org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata
   

   -
   org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata
   

   - org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata
   

   - org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata
   

   -
   
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible
   

   -
   org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs
   

   -
   
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatusRootDir
   

   -
   

Intervention. Stabilizing Yetus (Attn. Azure)

2020-11-09 Thread Ahmed Hussein
Hello folks,

Over the last month, there has been concern about the stability of Hadoop.
Looking at the latest QBT report (Nov 8th, 2020 1:39 AM),
there were 50 failing tests, 41 of which are in the "hadoop-azure" module.
Thanks to the effort of the community, the yetus qbt report looks better by
miles. However, it would be highly appreciated if some developers volunteered
some time to take a look at the hadoop-azure module.

If tests in fs.azure are irrelevant to active contributors, then please
consider disabling those tests to save resources and avoid side effects of
those failures on the other modules (busy CPUs, memory pressure, listening
ports, etc.).

Thank you.

-- 
Best Regards,

*Ahmed Hussein, PhD*


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-11-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/

No changes




-1 overall


The following subsystems voted -1:
pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.TestStripedFileAppend 
   hadoop.hdfs.TestGetFileChecksum 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.TestSpeculativeExecution 
   hadoop.mapred.nativetask.kvtest.KVTest 
   hadoop.streaming.TestSymLink 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemMocked 
   hadoop.fs.azure.TestBlobMetadata 
   hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
   hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
   hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
   hadoop.fs.azure.TestWasbFsck 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/whitespace-tabs.txt
  [2.0M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/diff-javadoc-javadoc-root.txt
  [2.0M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [356K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [476K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [100K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/320/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]
   

[jira] [Created] (HADOOP-17366) hadoop-cloud-storage transient dependencies need review

2020-11-09 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17366:
---

 Summary: hadoop-cloud-storage transient dependencies need review
 Key: HADOOP-17366
 URL: https://issues.apache.org/jira/browse/HADOOP-17366
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs, fs/azure
Affects Versions: 3.4.0
Reporter: Steve Loughran


A review of the hadoop-cloud-storage dependencies shows that things are 
creeping in there:

{code}
[INFO] |  +- org.apache.hadoop:hadoop-cloud-storage:jar:3.1.4:compile
[INFO] |  |  +- (org.apache.hadoop:hadoop-annotations:jar:3.1.4:compile - omitted for duplicate)
[INFO] |  |  +- org.apache.hadoop:hadoop-aliyun:jar:3.1.4:compile
[INFO] |  |  |  \- com.aliyun.oss:aliyun-sdk-oss:jar:3.4.1:compile
[INFO] |  |  |     +- org.jdom:jdom:jar:1.1:compile
[INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:compile
[INFO] |  |  |     |  \- stax:stax-api:jar:1.0.1:compile
[INFO] |  |  |     +- com.aliyun:aliyun-java-sdk-core:jar:3.4.0:compile
[INFO] |  |  |     +- com.aliyun:aliyun-java-sdk-ram:jar:3.0.0:compile
[INFO] |  |  |     +- com.aliyun:aliyun-java-sdk-sts:jar:3.0.0:compile
[INFO] |  |  |     \- com.aliyun:aliyun-java-sdk-ecs:jar:4.2.0:compile
[INFO] |  |  +- (org.apache.hadoop:hadoop-aws:jar:3.1.4:compile - omitted for duplicate)
[INFO] |  |  +- (org.apache.hadoop:hadoop-azure:jar:3.1.4:compile - omitted for duplicate)
[INFO] |  |  +- org.apache.hadoop:hadoop-azure-datalake:jar:3.1.4:compile
[INFO] |  |  |  \- com.microsoft.azure:azure-data-lake-store-sdk:jar:2.2.7:compile
[INFO] |  |  |     \- (org.slf4j:slf4j-api:jar:1.7.21:compile - omitted for conflict with 1.7.30)
{code}

Need to review and cut things which already come in via hadoop-common (slf4j, 
maybe some of the aliyun stuff).






[jira] [Resolved] (HADOOP-17364) Hadoop GPG key is not valid

2020-11-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17364.

Resolution: Done

> Hadoop GPG key is not valid
> ---
>
> Key: HADOOP-17364
> URL: https://issues.apache.org/jira/browse/HADOOP-17364
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Philipp Dallig
>Priority: Blocker
>
> Hi,
> I regularly build Hadoop images.
> At the moment the KEYS file 
> ([https://dist.apache.org/repos/dist/release/hadoop/common/KEYS]) is invalid.
> Error-Message:
> {code:java}
> gpg: key FC8D04357BB49FF0: public key "Sammi Chen (CODE SIGNING KEY) 
> " imported
> gpg: invalid armor header: 
> mQINBF9U5ZcBEADJS2a8ihhZtN1wXOJfyLZreuHL9HJxRvogQbhrhpFQrKAusdf2\n
> gpg: CRC error; 95D523 - 51AC03
> gpg: packet(7) with unknown version 103
> gpg: read_block: read error: Unknown version in packet
> gpg: import from '/tmp/KEYS' failed: Invalid keyring
> gpg: Total number processed: 60
> gpg:   imported: 60
> gpg: no ultimately trusted keys found
> {code}
> Steps to reproduce:
>  * Install Docker
>  * Create a Dockerfile with the following Content
>  ** 
> {code:java}
> FROM alpine:3.12
> RUN set -ex && \
> /sbin/apk add --no-cache wget gnupg tar && \
> # Install Hadoop
> /usr/bin/wget -nv 
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS -O /tmp/KEYS && 
> \
> /usr/bin/gpg --import /tmp/KEYS
>  {code}
>  * Run docker build
>  ** 
> {code:java}
>  docker build -t hadoop-test -f Dockerfile .
> {code}
> Other users complaining about this error:
>  - 
> [https://askubuntu.com/questions/1290190/hadoop-gpg-key-is-not-valid-no-ultimately-trusted-keys-found]
>  - 
> [https://stackoverflow.com/questions/64719392/invalid-keyring-are-hadoop-gpg-keys-are-wrong]
> I hope for a quick fix, because automatic builds are currently blocked.
> If this duplicates another ticket, please close it and link it to the 
> original.
> Best Regards
>  Reamer






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-11-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [272K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/111/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]
   

[jira] [Created] (HADOOP-17365) Contract test for renaming over existing file is too lenient

2020-11-09 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HADOOP-17365:
-

 Summary: Contract test for renaming over existing file is too 
lenient
 Key: HADOOP-17365
 URL: https://issues.apache.org/jira/browse/HADOOP-17365
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{AbstractContractRenameTest#testRenameFileOverExistingFile}} is too lenient in 
its assertions.

* {{FileAlreadyExistsException}} is accepted regardless of the "rename 
overwrites" and "rename returns false if exists" contract options. I think it 
should be accepted only if both of those options are false.
* The "rename returns false if exists" option is ignored if the file is not 
overwritten by the implementation.

Also, I think the "rename returns false if exists" option is incorrectly 
inverted in the test, which it can get away with because the checks are loose.

(Found this while looking at a change in Ozone FS implementation from throwing 
exception to returning false.  The contract test unexpectedly passed without 
changing {{contract.xml}}.)
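
As a rough sketch of the tightened checks (the {{ContractOptions}} keys are 
real; {{isSupported()}} and the assert helpers are simplified stand-ins for 
the contract test base class):

{code:java}
// Sketch only, inside a contract test where srcFile and destFile both exist.
boolean overwrites = isSupported(RENAME_OVERWRITES_DEST);
boolean returnsFalse = isSupported(RENAME_RETURNS_FALSE_IF_DEST_EXISTS);
try {
  boolean renamed = getFileSystem().rename(srcFile, destFile);
  if (overwrites) {
    assertTrue("rename over an existing file must succeed", renamed);
  } else if (returnsFalse) {
    assertFalse("rename over an existing file must return false", renamed);
  }
} catch (FileAlreadyExistsException e) {
  // Accept the exception only when the FS neither overwrites nor
  // signals failure by returning false.
  if (overwrites || returnsFalse) {
    throw e;
  }
}
{code}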






[jira] [Created] (HADOOP-17364) Hadoop GPG key is not valid

2020-11-09 Thread Philipp Dallig (Jira)
Philipp Dallig created HADOOP-17364:
---

 Summary: Hadoop GPG key is not valid
 Key: HADOOP-17364
 URL: https://issues.apache.org/jira/browse/HADOOP-17364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Philipp Dallig


Hi,

I regularly build Hadoop images.

At the moment the KEYS file 
(https://dist.apache.org/repos/dist/release/hadoop/common/KEYS) is invalid.

Error-Message:
 {code}
gpg: key FC8D04357BB49FF0: public key "Sammi Chen (CODE SIGNING KEY) 
" imported
gpg: invalid armor header: 
mQINBF9U5ZcBEADJS2a8ihhZtN1wXOJfyLZreuHL9HJxRvogQbhrhpFQrKAusdf2\n
gpg: CRC error; 95D523 - 51AC03
gpg: packet(7) with unknown version 103
gpg: read_block: read error: Unknown version in packet
gpg: import from '/tmp/KEYS' failed: Invalid keyring
gpg: Total number processed: 60
gpg:   imported: 60
gpg: no ultimately trusted keys found
{code}


Steps to reproduce:
 * Install Docker
 * Create a Dockerfile with the following Content
 ** 
{code}
FROM alpine:3.12
RUN set -ex && \
/sbin/apk add --no-cache wget gnupg tar && \
# Install Hadoop
/usr/bin/wget -nv 
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS -O /tmp/KEYS && \
/usr/bin/gpg --import /tmp/KEYS
 {code}

 * Run docker build
 ** 
{code}
 docker build -t hadoop-test -f Dockerfile .
{code}

Other users complaining about this error:
 - 
https://askubuntu.com/questions/1290190/hadoop-gpg-key-is-not-valid-no-ultimately-trusted-keys-found
 - 
https://stackoverflow.com/questions/64719392/invalid-keyring-are-hadoop-gpg-keys-are-wrong

I hope for a quick fix, because automatic builds are currently blocked.

If this duplicates another ticket, please close it and link it to the original.

Best Regards
Reamer



