[jira] [Created] (HADOOP-16766) Increase timeout for RPCCallBenchmark.testBenchmarkWithProto()

2019-12-16 Thread Zhenyu Zheng (Jira)
Zhenyu Zheng created HADOOP-16766:
-

 Summary: Increase timeout for 
RPCCallBenchmark.testBenchmarkWithProto()
 Key: HADOOP-16766
 URL: https://issues.apache.org/jira/browse/HADOOP-16766
 Project: Hadoop Common
  Issue Type: Wish
Reporter: Zhenyu Zheng


Currently the timeout setting for 
org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto() is 20 
seconds. On our ARM64 machine with 8 cores it takes about 60 seconds to 
finish, so I propose increasing the timeout to 80 seconds. This will not 
affect tests on other platforms, as they finish in less than 20 seconds.
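
A minimal sketch of the proposed change (illustrative only: the real test 
lives in org.apache.hadoop.ipc.TestRPCCallBenchmark and its body is elided 
here; this assumes the suite uses JUnit 4 per-test timeouts):
{code:java}
import org.junit.Test;

public class TestRPCCallBenchmark {

  // Sketch only: raise the JUnit 4 per-test timeout from 20s to 80s so
  // slower platforms (e.g. an 8-core ARM64 host, ~60s) can finish.
  @Test(timeout = 80000)  // previously timeout = 20000
  public void testBenchmarkWithProto() throws Exception {
    // ... existing benchmark logic, unchanged ...
  }
}
{code}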






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-12-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/538/

No changes




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0   http://yetus.apache.org


[jira] [Resolved] (HADOOP-16765) Fix curator dependencies for gradle projects using hadoop-minicluster

2019-12-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-16765.
--
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix curator dependencies for gradle projects using hadoop-minicluster
> -
>
> Key: HADOOP-16765
> URL: https://issues.apache.org/jira/browse/HADOOP-16765
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mate Szalay-Beko
>Assignee: Mate Szalay-Beko
>Priority: Major
> Fix For: 3.3.0
>
>
> *The Problem:*
> The Kudu unit tests that use the `MiniDFSCluster` are broken due to a Guava 
> dependency issue in the `hadoop-minicluster` module.
> {code:java}
> java.lang.NoSuchMethodError: 
> com.google.common.util.concurrent.Futures.addCallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureCallback;)V
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.addResultCachingCallback(ThrottledAsyncChecker.java:167)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.schedule(ThrottledAsyncChecker.java:156)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:166)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2794)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2709)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1669)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:911)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.kudu.backup.HDFSTestKuduBackupLister.setUp(TestKuduBackupLister.scala:216)
> {code}
> The issue in that change is that even though Guava was excluded from the 
> `curator-client` dependency, the `curator-framework` dependency defined just 
> below it does not exclude Guava:
> [https://github.com/apache/hadoop/blob/fc97034b29243a0509633849de55aa734859/hadoop-project/pom.xml#L1391-L1414]
> This causes Guava 27.0.1-jre to be pulled in instead of the Guava 11.0.2 
> defined by Hadoop (the two-argument {{Futures.addCallback}} overload that 
> Hadoop was compiled against no longer exists in recent Guava releases, hence 
> the {{NoSuchMethodError}}):
> {noformat}
> +--- org.apache.hadoop:hadoop-minicluster:3.1.1.7.1.0.0-SNAPSHOT
> |+--- org.apache.hadoop:hadoop-common:3.1.1.7.1.0.0-SNAPSHOT
> ||+--- org.apache.hadoop:hadoop-annotations:3.1.1.7.1.0.0-SNAPSHOT
> ||+--- com.google.guava:guava:11.0.2 -> 27.0.1-jre
> {noformat}
> {noformat}
> +--- org.apache.curator:curator-framework:4.2.0
> |\--- org.apache.curator:curator-client:4.2.0
> | +--- org.apache.zookeeper:zookeeper:3.5.4-beta -> 
> 3.5.5.7.1.0.0-SNAPSHOT (*)
> | +--- com.google.guava:guava:27.0.1-jre (*)
> | \--- org.slf4j:slf4j-api:1.7.25{noformat}
>  
> *The root cause:*
> I was able to reproduce this issue with some dummy projects, see 
> [https://github.com/symat/transitive-dependency-test]
> It seems that Gradle behaves differently from Maven here. A Maven user will 
> not see this problem, because the exclude rules defined for 
> {{curator-client}} are enforced even when {{curator-client}} comes in 
> transitively through {{curator-framework}}. Using hadoop-minicluster in a 
> Gradle project, however, leads to this problem (unless extra excludes / 
> dependencies are defined in the Gradle project).
> *The proposed solution* is to add exclude rules for all Curator 
> dependencies, preventing other Gradle projects using Hadoop from breaking 
> because of the Curator upgrade.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1352/

[Dec 15, 2019 4:28:04 PM] (snemeth) YARN-9923. Introduce HealthReporter 
interface to support multiple health




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() 
calls Thread.sleep() with a lock held At DirectoryScanner.java:lock held At 
DirectoryScanner.java:[line 441] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown 
   

[jira] [Resolved] (HADOOP-16450) ITestS3ACommitterFactory failing, S3 client is not inconsistent

2019-12-16 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16450.
-
Resolution: Fixed

> ITestS3ACommitterFactory failing, S3 client is not inconsistent
> ---
>
> Key: HADOOP-16450
> URL: https://issues.apache.org/jira/browse/HADOOP-16450
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> Transient failure of {{ITestS3ACommitterFactory}} during a sequential run; 
> the FS created wasn't inconsistent.
> That test suite doesn't override the superclass AbstractCommitITest's 
> {{useInconsistentClient}} method, so it declares that it expects an 
> inconsistent client. If we have it return false, it won't care anymore what 
> kind of FS client it gets.
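>
> A minimal sketch of the fix (an assumption on my part that the suite can 
> simply override the superclass hook named above; the class relationship is 
> taken from the description):
> {code:java}
> public class ITestS3ACommitterFactory extends AbstractCommitITest {
>   // Sketch: declare that this suite does not need the inconsistent S3
>   // client, so it accepts whichever FS client the factory produces.
>   @Override
>   public boolean useInconsistentClient() {
>     return false;
>   }
> }
> {code}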






[jira] [Created] (HADOOP-16765) Fix curator dependencies for gradle projects using hadoop-minicluster

2019-12-16 Thread Mate Szalay-Beko (Jira)
Mate Szalay-Beko created HADOOP-16765:
-

 Summary: Fix curator dependencies for gradle projects using 
hadoop-minicluster
 Key: HADOOP-16765
 URL: https://issues.apache.org/jira/browse/HADOOP-16765
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Mate Szalay-Beko


*The Problem:*

The Kudu unit tests that use the `MiniDFSCluster` are broken due to a Guava 
dependency issue in the `hadoop-minicluster` module.
{code:java}
java.lang.NoSuchMethodError: 
com.google.common.util.concurrent.Futures.addCallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureCallback;)V
at 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.addResultCachingCallback(ThrottledAsyncChecker.java:167)
at 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.schedule(ThrottledAsyncChecker.java:156)
at 
org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:166)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2794)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2709)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1669)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:911)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
at 
org.apache.kudu.backup.HDFSTestKuduBackupLister.setUp(TestKuduBackupLister.scala:216)
{code}
The issue in that change is that even though Guava was excluded from the 
`curator-client` dependency, the `curator-framework` dependency defined just 
below it does not exclude Guava:
[https://github.com/apache/hadoop/blob/fc97034b29243a0509633849de55aa734859/hadoop-project/pom.xml#L1391-L1414]

This causes Guava 27.0.1-jre to be pulled in instead of the Guava 11.0.2 
defined by Hadoop (the two-argument {{Futures.addCallback}} overload that 
Hadoop was compiled against no longer exists in recent Guava releases, hence 
the {{NoSuchMethodError}}):
{noformat}
+--- org.apache.hadoop:hadoop-minicluster:3.1.1.7.1.0.0-SNAPSHOT
|+--- org.apache.hadoop:hadoop-common:3.1.1.7.1.0.0-SNAPSHOT
||+--- org.apache.hadoop:hadoop-annotations:3.1.1.7.1.0.0-SNAPSHOT
||+--- com.google.guava:guava:11.0.2 -> 27.0.1-jre
{noformat}
{noformat}
+--- org.apache.curator:curator-framework:4.2.0
|\--- org.apache.curator:curator-client:4.2.0
| +--- org.apache.zookeeper:zookeeper:3.5.4-beta -> 
3.5.5.7.1.0.0-SNAPSHOT (*)
| +--- com.google.guava:guava:27.0.1-jre (*)
| \--- org.slf4j:slf4j-api:1.7.25{noformat}
 

*The root cause:*

I was able to reproduce this issue with some dummy projects, see 
[https://github.com/symat/transitive-dependency-test]

It seems that Gradle behaves differently from Maven here. A Maven user will 
not see this problem, because the exclude rules defined for 
{{curator-client}} are enforced even when {{curator-client}} comes in 
transitively through {{curator-framework}}. Using hadoop-minicluster in a 
Gradle project, however, leads to this problem (unless extra excludes / 
dependencies are defined in the Gradle project).

*The proposed solution* is to add exclude rules for all Curator 
dependencies, preventing other Gradle projects using Hadoop from breaking 
because of the Curator upgrade.
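
A sketch of what the fix could look like in hadoop-project/pom.xml (the 
version property and the choice of artifact are illustrative; the same 
exclusion block would be repeated for each Curator dependency Hadoop 
declares):
{code:xml}
<!-- Sketch: exclude Guava from curator-framework, mirroring the exclusion
     already present on curator-client. Repeat for every
     org.apache.curator artifact. -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>${curator.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}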


