[jira] [Created] (HADOOP-17756) Increase precommit job timeout from 20 hours to 24 hours.

2021-06-09 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HADOOP-17756:
-

 Summary: Increase precommit job timeout from 20 hours to 24 hours.
 Key: HADOOP-17756
 URL: https://issues.apache.org/jira/browse/HADOOP-17756
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


If QA runs for the whole project, it may not finish within 20 hours. I 
suggest extending the timeout to 1 day (24 hours).
* https://github.com/apache/hadoop/pull/3049 : 17 hours. (finished)
* https://github.com/apache/hadoop/pull/3087 : 20 hours. (timeout)






Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-06-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/182/

No changes




-1 overall


The following subsystems voted -1:
asflicense blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 694] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 333] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-06-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/

[Jun 8, 2021 3:11:08 AM] (noreply) HADOOP-17727. Modularize docker images 
(#3043)
[Jun 8, 2021 7:14:06 AM] (noreply) HDFS-16048. RBF: Print network topology on 
the router web (#3062)
[Jun 8, 2021 1:03:43 PM] (821684824) YARN-10807. Parents node labels are 
incorrectly added to child queues in weight mode. Contributed by Benjamin Teke.
[Jun 8, 2021 3:07:40 PM] (pjosephraj) YARN-10792. Set Completed AppAttempt 
LogsLink to Log Server URL. Contributed by Abhinaba Sarkar
[Jun 8, 2021 4:09:31 PM] (noreply) HDFS-16042. DatanodeAdminMonitor scan should 
be delay based (#3058)
[Jun 8, 2021 8:56:40 PM] (noreply) HADOOP-17631. Configuration 
${env.VAR:-FALLBACK} to eval FALLBACK when restrictSystemProps=true (#2977)
[Jun 8, 2021 9:03:03 PM] (noreply) HADOOP-17725. Improve error message for 
token providers in ABFS (#3041)




-1 overall


The following subsystems voted -1:
asflicense blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   
hadoop.yarn.server.timelineservice.storage.common.TestHBaseTimelineStorageUtils 
   hadoop.yarn.csi.client.TestCsiClient 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-compile-javac-root.txt
 [380K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-checkstyle-root.txt
 [16M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-javadoc-javadoc-root.txt
 [408K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client.txt
 [24K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]

   asflicense:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/533/artifact/out/results-asflicense.txt
 [4.0K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Re: [VOTE] Release Apache Hadoop 3.3.1 RC3

2021-06-09 Thread Stack
+1



* Signature: ok

* Checksum : ok

* Rat check (1.8.0_191): ok

 - mvn clean apache-rat:check

* Built from source (1.8.0_191): ok

 - mvn clean install -DskipTests


Ran a ten-node cluster w/ hbase on top running its verification loadings w/
(gentle) chaos. Had trouble getting the rig running, but it was mostly pilot error
and nothing that I could particularly attribute to hdfs after poking in the logs.

Messed around in the UI and shell some. Nothing untoward.

Wei-Chiu fixed broken tests over in hbase, and complete runs are pretty much
there (a classic flaky test seems more so on 3.3.1... will dig in more on why).


Thanks,

S


On Tue, Jun 1, 2021 at 3:29 AM Wei-Chiu Chuang  wrote:

> Hi community,
>
> This is release candidate RC3 of the Apache Hadoop 3.3.1 line. All blocker
> issues have been resolved [1] again.
>
> There are 2 additional issues resolved for RC3:
> * Revert "MAPREDUCE-7303. Fix TestJobResourceUploader failures after
> HADOOP-16878
> * Revert "HADOOP-16878. FileUtil.copy() to throw IOException if the source
> and destination are the same
>
> There are 4 issues resolved for RC2:
> * HADOOP-17666. Update LICENSE for 3.3.1
> * MAPREDUCE-7348. TestFrameworkUploader#testNativeIO fails. (#3053)
> * Revert "HADOOP-17563. Update Bouncy Castle to 1.68. (#2740)" (#3055)
> * HADOOP-17739. Use hadoop-thirdparty 1.1.1. (#3064)
>
> The Hadoop-thirdparty 1.1.1, as previously mentioned, contains two extra
> fixes compared to hadoop-thirdparty 1.1.0:
> * HADOOP-17707. Remove jaeger document from site index.
> * HADOOP-17730. Add back error_prone
>
> *RC tag is release-3.3.1-RC3
> https://github.com/apache/hadoop/releases/tag/release-3.3.1-RC3
>
> *The RC3 artifacts are at*:
> https://home.apache.org/~weichiu/hadoop-3.3.1-RC3/
> ARM artifacts: https://home.apache.org/~weichiu/hadoop-3.3.1-RC3-arm/
>
> *The maven artifacts are hosted here:*
> https://repository.apache.org/content/repositories/orgapachehadoop-1320/
>
> *My public key is available here:*
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
>
> Things I've verified:
> * all blocker issues targeting 3.3.1 have been resolved.
> * stable/evolving API changes between 3.3.0 and 3.3.1 are compatible.
> * LICENSE and NOTICE files checked
> * RELEASENOTES and CHANGELOG
> * rat check passed.
> * Built HBase master branch on top of Hadoop 3.3.1 RC2, ran unit tests.
> * Built Ozone master on top of Hadoop 3.3.1 RC2, ran unit tests.
> * Extra: built 50 other open source projects on top of Hadoop 3.3.1 RC2.
> Had to patch some of them due to the commons-lang migration (Hadoop 3.2.0) and
> dependency divergence. Issues are being identified, but so far nothing is a
> blocker for Hadoop itself.
>
> Please try the release and vote. The vote will run for 5 days.
>
> My +1 to start,
>
> [1] https://issues.apache.org/jira/issues/?filter=12350491
> [2]
>
> https://github.com/apache/hadoop/compare/release-3.3.1-RC1...release-3.3.1-RC3
>


[jira] [Created] (HADOOP-17755) EOF reached error reading ORC file on S3A

2021-06-09 Thread Arghya Saha (Jira)
Arghya Saha created HADOOP-17755:


 Summary: EOF reached error reading ORC file on S3A
 Key: HADOOP-17755
 URL: https://issues.apache.org/jira/browse/HADOOP-17755
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.0
 Environment: Hadoop 3.2.0
Reporter: Arghya Saha


Hi, I am trying to do some transformations using Spark 3.1.1 with Hadoop 3.2 on K8s, 
using s3a.

I have around 700 GB of data to read and around 200 executors (5 vCores and 30 GB 
each).

It is able to read most of the files in the problematic stage (Scan orc => Filter => 
Project) but fails on a few files at the end with the error below.

I am able to read and rewrite the specific file mentioned, which suggests the file 
is not corrupted.

Let me know if further information is required.

 
{code:java}
java.io.IOException: Error reading file: s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
    at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331)
    at org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
    at org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: End of file reached before reading fully.
    at org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
    at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
    at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
    at org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237)
    at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105)
    at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256)
    at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1291)
    at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1327)
    ... 20 more
{code}
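
As an illustration (not part of the original report), a minimal standalone reader along these lines may help confirm whether the EOF reproduces outside Spark. "YOUR_BUCKET" and the class name are placeholders, and s3a credentials/endpoint are assumed to be configured in core-site.xml or the Configuration object.

{code:java}
// Hedged reproduction sketch: read the same ORC file directly with the ORC core
// reader, bypassing Spark. Bucket name is a placeholder; s3a credentials are
// assumed to be configured elsewhere.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;

public class OrcS3aReadCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(
        "s3a://YOUR_BUCKET/part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc");
    Reader reader = OrcFile.createReader(path, OrcFile.readerOptions(conf));
    VectorizedRowBatch batch = reader.getSchema().createRowBatch();
    RecordReader rows = reader.rows();
    long count = 0;
    while (rows.nextBatch(batch)) {   // the EOFException in the trace above originates under nextBatch
      count += batch.size;
    }
    rows.close();
    System.out.println("Read " + count + " rows from " + path);
  }
}
{code}

If a standalone read like this also fails on the same file, that would point at the S3A read path rather than Spark's ORC integration.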
 

 






[jira] [Created] (HADOOP-17754) Remove lock contention in overlay of Configuration

2021-06-09 Thread Xuesen Liang (Jira)
Xuesen Liang created HADOOP-17754:
-

 Summary: Remove lock contention in overlay of Configuration
 Key: HADOOP-17754
 URL: https://issues.apache.org/jira/browse/HADOOP-17754
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xuesen Liang


The *overlay* field of class *Configuration* is a point of lock contention, 
which is bad for performance.

E.g.,
{code:java}
$ grep 'waiting to lock <0x7fa4fc113378>' 17326.jstack | uniq -c
    257 - waiting to lock <0x7fa4fc113378> (a org.apache.hadoop.conf.Configuration)
{code}
and the thread stack is as follows:
{code:java}
"hconnection-0x66971f6b-shared--pool1-t1060" #6315 daemon prio=5 os_prio=0 
tid=0x7f5c04018800 nid=0x11f31 waiting for monitor entry 
[0x7f567f3f4000] java.lang.Thread.State: BLOCKED (on object monitor) at 
org.apache.hadoop.conf.Configuration.getOverlay(Configuration.java:1328) - 
waiting to lock <0x7fa4fc113378> (a org.apache.hadoop.conf.Configuration) 
at 
org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:684) 
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1088) at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1145) at 
org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1375) at 
org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory.getMetadataPriority(PhoenixRpcSchedulerFactory.java:92)
 at 
org.apache.hadoop.hbase.ipc.controller.MetadataRpcController.(MetadataRpcController.java:59)
 at 
org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory.getController(ClientRpcControllerFactory.java:57)
 at 
org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory.newController(ClientRpcControllerFactory.java:41)
 at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:216) 
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65) 
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
 at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:365)
 at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:339)
 at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
 at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)
{code}
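
For context only (this is illustrative, not the actual Hadoop Configuration source and not a proposed patch): a lazily-initialized field guarded by the instance monitor, as in the getOverlay frame above, can be rewritten with volatile double-checked initialization so the common, already-initialized read path never takes the lock. The field and method names below merely echo the stack trace.

{code:java}
// Illustrative sketch: volatile double-checked initialization so the hot path
// performs one volatile read instead of entering a synchronized block.
import java.util.Properties;

class OverlayHolderSketch {
  private volatile Properties overlay;   // volatile: safe publication of the initialized map

  Properties getOverlay() {
    Properties props = overlay;          // single volatile read on the hot path
    if (props == null) {
      synchronized (this) {              // only the very first callers contend here
        props = overlay;
        if (props == null) {
          overlay = props = new Properties();
        }
      }
    }
    return props;
  }
}
{code}

An alternative with the same effect is to initialize the overlay eagerly in the constructor, which removes the branch entirely at a small memory cost per Configuration instance.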






reviewers needed for "HADOOP-17752. Remove lock contention in REGISTRY of Configuration "

2021-06-09 Thread Steve Loughran
Can some people who understand weak references take a look at this PR:
https://github.com/apache/hadoop/pull/3085

Its aim is to remove a lock in Configuration construction, so it could speed
construction up. It does get fairly complicated, though, which is why it
needs many eyeballs.

We've hit scale issues with Configuration in the recent past (different
hive threads creating FS instances). I don't think this was where the
problems surfaced, but given how broadly Configuration is used: (a) better
scale is good and (b) we don't want to break it.
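
For reviewers who want a feel for the bookkeeping involved, below is a minimal, hypothetical sketch of a lock-free weak-reference registry (a ConcurrentHashMap plus a ReferenceQueue). It is not what PR #3085 actually implements; it only illustrates the kind of weak-reference handling the review asks about.

{code:java}
// Hypothetical sketch, not the PR's code: entries are weakly referenced so
// registered objects can still be garbage collected, and a ReferenceQueue is
// drained to purge entries whose referents have already been collected.
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class WeakRegistrySketch<T> {
  private final Map<Reference<T>, Boolean> entries = new ConcurrentHashMap<>();
  private final ReferenceQueue<T> collected = new ReferenceQueue<>();

  void register(T item) {
    expunge();                                           // drop stale entries opportunistically
    entries.put(new WeakReference<>(item, collected), Boolean.TRUE);
  }

  int size() {
    expunge();
    return entries.size();
  }

  private void expunge() {
    Reference<? extends T> ref;
    while ((ref = collected.poll()) != null) {
      entries.remove(ref);
    }
  }
}
{code}

A real change also has to keep any iteration over the registry safe and avoid leaking cleared references, which is part of why extra eyeballs help.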

-steve


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-06-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-mvnsite-root.txt
  [576K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [48K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [452K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [96K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/324/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
 

[jira] [Created] (HADOOP-17753) Keep restrict-imports-enforcer-rule for Guava Lists in hadoop-main pom

2021-06-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17753:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Lists in 
hadoop-main pom
 Key: HADOOP-17753
 URL: https://issues.apache.org/jira/browse/HADOOP-17753
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani





