Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/

[Sep 5, 2018 4:53:42 AM] (xiao) HDFS-13812. Fix the inconsistent default 
refresh interval on Caching
[Sep 5, 2018 5:56:57 AM] (xyao) HDDS-268. Add SCM close container watcher. 
Contributed by Ajay Kumar.
[Sep 5, 2018 10:41:06 AM] (elek) HDDS-315. ozoneShell infoKey does not work for 
directories created as
[Sep 5, 2018 12:31:36 PM] (elek) HDDS-333. Create an Ozone Logo. Contributed by 
Priyanka Nagwekar.
[Sep 5, 2018 12:47:54 PM] (skumpf) YARN-8638. Allow linux container runtimes to 
be pluggable. Contributed
[Sep 5, 2018 1:05:33 PM] (nanda) HDDS-358. Use DBStore and TableStore for 
DeleteKeyService. Contributed
[Sep 5, 2018 3:33:27 PM] (yqlin) HDFS-13815. RBF: Add check to order command. 
Contributed by Ranith
[Sep 5, 2018 4:52:35 PM] (weichiu) HADOOP-15696. KMS performance regression due 
to too many open file
[Sep 5, 2018 5:50:25 PM] (gifuma) HADOOP-15707. Add IsActiveServlet to be used 
for Load Balancers.
[Sep 5, 2018 7:26:37 PM] (xyao) HDDS-303. Removing logic to identify containers 
to be closed from SCM.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 
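
The three Submarine warnings above are standard FindBugs patterns: platform-default encoding, a writer that may not be closed on an exception path, and string concatenation inside a loop. A minimal sketch of the usual remediations (hypothetical code echoing the reported method names, not the actual Submarine sources):

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative fixes only -- not the Submarine code itself.
public class FindbugsFixSketch {

  // "Reliance on default encoding" + "obligation to clean up java.io.Writer
  // is not discharged": name the charset explicitly and use
  // try-with-resources so the writer is closed even on an exception.
  static void writeLaunchScript(Path script, String body) throws IOException {
    try (Writer w = Files.newBufferedWriter(script, StandardCharsets.UTF_8)) {
      w.write(body);
    }
  }

  // "Concatenates strings using + in a loop": accumulate into a
  // StringBuilder instead of building a new String on every iteration.
  static String componentArrayJson(String name, int count) {
    StringBuilder sb = new StringBuilder("[");
    for (int i = 0; i < count; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append('"').append(name).append('-').append(i).append('"');
    }
    return sb.append(']').toString();
  }
}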

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/

[Sep 2, 2018 8:05:52 AM] (bibinchundatt) YARN-8535. Fix DistributedShell unit 
tests. Contributed by Abhishek
[Sep 2, 2018 6:47:32 PM] (aengineer) HDDS-357. Use DBStore and TableStore for 
OzoneManager non-background




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/

[Sep 3, 2018 6:56:34 AM] (msingh) HDDS-263. Add retries in Ozone Client to 
handle BlockNotCommitted
[Sep 3, 2018 8:58:31 AM] (vinayakumarb) HDFS-13867. RBF: Add validation for max 
arguments for Router admin ls,
[Sep 3, 2018 9:07:57 AM] (vinayakumarb) HDFS-13774. EC: 'hdfs ec -getPolicy' is 
not retrieving policy details
[Sep 3, 2018 11:32:55 AM] (elek) HDDS-336. Print out container location 
information for a specific ozone
[Sep 3, 2018 2:44:45 PM] (nanda) HDDS-343. Containers are stuck in closing 
state in scm. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  

understanding shellprofile.d vs hadoop-tools

2018-09-07 Thread Steve Loughran


This is my hadooprc:


hadoop_add_to_classpath_tools hadoop-aws
hadoop_add_to_classpath_tools hadoop-azure
hadoop_add_to_classpath_tools hadoop-azuredatalake

It picks up hadoop-aws OK (branch-3.1 here), but I'm not getting the 
hadoop-azure ones, because those are actually making their way into 
libexec/shellprofile.d rather than hadoop-tools.

Is this intentional? And if so, how can I add them to my CP?

Steve

(who is still defeated by classpaths)


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/

[Sep 4, 2018 5:37:37 AM] (xiao) HDFS-13885. Add debug logs in dfsclient around 
decrypting EDEK.
[Sep 4, 2018 3:46:12 PM] (stevel) HADOOP-10219. ipc.Client.setupIOstreams() 
needs to check for
[Sep 4, 2018 6:11:50 PM] (nanda) HDDS-75. Support for CopyContainer. 
Contributed by Elek, Marton.
[Sep 4, 2018 6:41:07 PM] (nanda) HDDS-98. Adding Ozone Manager Audit Log. 
Contributed by Dinesh
[Sep 4, 2018 7:17:17 PM] (inigoiri) HDFS-13857. RBF: Choose to enable the 
default nameservice to read/write
[Sep 4, 2018 9:57:54 PM] (hanishakoneru) HDDS-369. Remove the containers of a 
dead node from the container state
[Sep 4, 2018 11:27:31 PM] (aengineer) HDDS-396. Remove openContainers.db from 
SCM. Contributed by Dinesh
[Sep 5, 2018 12:10:44 AM] (szetszwo) HDDS-383. Ozone Client should discard 
preallocated blocks from closed




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor 
   hadoop.yarn.service.TestServiceAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [60K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/

[Sep 6, 2018 3:53:21 AM] (vrushali) HADOOP-15657 Registering MutableQuantiles 
via Metric annotation.
[Sep 6, 2018 11:16:54 AM] (elek) HDDS-404. Implement toString() in 
OmKeyLocationInfo. Contributed by
[Sep 6, 2018 7:13:29 PM] (jlowe) MAPREDUCE-7131. Job History Server has race 
condition where it moves
[Sep 6, 2018 7:44:08 PM] (aengineer) HDDS-405. User/volume mapping is not 
cleaned up during the deletion of
[Sep 6, 2018 9:35:07 PM] (szetszwo) HDDS-297. Add pipeline actions in Ozone.  
Contributed by Mukul Kumar
[Sep 6, 2018 9:48:00 PM] (gifuma) HDFS-13695. Move logging to slf4j in HDFS 
package. Contributed by Ian
[Sep 6, 2018 10:09:21 PM] (aengineer) HDDS-406. Enable acceptace test of the 
putKey for rpc protocol.
[Sep 6, 2018 11:47:54 PM] (inigoiri) HDFS-13836. RBF: Handle mount table znode 
with null value. Contributed
[Sep 6, 2018 11:58:15 PM] (xyao) HDDS-397. Handle deletion for keys with no 
blocks. Contributed by Lokesh




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-compile-javac-root.txt
  [304K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   

[jira] [Created] (HADOOP-15731) TestDistributedShell fails on Windows

2018-09-07 Thread Botong Huang (JIRA)
Botong Huang created HADOOP-15731:
-

 Summary: TestDistributedShell fails on Windows
 Key: HADOOP-15731
 URL: https://issues.apache.org/jira/browse/HADOOP-15731
 Project: Hadoop Common
  Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang


[ERROR] 
testDSShellWithMultipleArgs(org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell)
 Time elapsed: 25.68 s <<< FAILURE!
java.lang.AssertionError
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at 
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.verifyContainerLog(TestDistributedShell.java:1296)

[ERROR] 
testDSShellWithoutDomainV2CustomizedFlow(org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell)
 Time elapsed: 90.021 s <<< ERROR!
java.lang.Exception: test timed out after 9 milliseconds
 at java.lang.Thread.sleep(Native Method)
 at 
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:398)
 at 
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithoutDomainV2CustomizedFlow(TestDistributedShell.java:313)

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15730) Add Ozone submodule to the hadoop.apache.org

2018-09-07 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-15730:
-

 Summary: Add Ozone submodule to the hadoop.apache.org
 Key: HADOOP-15730
 URL: https://issues.apache.org/jira/browse/HADOOP-15730
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


The current hadoop.apache.org doesn't mention Ozone in the "Modules" section.

We can add something like this (or better):

{quote}
Hadoop Ozone is an object store for Hadoop, built on top of Hadoop HDDS, 
which provides a low-level binary storage layer.
{quote}

We can also link to http://ozone.hadoop.apache.org




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-07 Thread Elek, Marton

Thanks for all the positive feedback.

I just uploaded the new site to the new repository:

https://gitbox.apache.org/repos/asf/hadoop-site.git (asf-site branch)

It contains:

1. Same content, new layout. (source files of the site)

2. The rendered content under /content together with all the javadocs 
(289,003 files)


3. The old site (as suggested by Vinod; I added a link back to the old 
site): https://hadoop.apache.org/old


Infra has already changed the pubsub script. The new site is live. 
Please let me know if you see any problem...


I will update the wiki pages / release instructions very soon.

Thanks,
Marton

ps:

Please give me write permission to the OLD wiki 
(https://wiki.apache.org/hadoop/), if you can. My username is MartonElek

Thanks a lot.


On 08/31/2018 10:07 AM, Elek, Marton wrote:

Bumping this thread one last time.

I have the following proposal:

1. I will request a new git repository hadoop-site.git and import the 
new site there (which has exactly the same content as the existing 
site).


2. I will ask infra to use the new repository as the source of 
hadoop.apache.org


3. I will manually sync all of the changes over the next two months 
from the git repo back to the svn site (release announcements, new committers)


IN CASE OF ANY PROBLEM we can simply switch back to the svn site.

If no-one objects within three days, I'll assume lazy consensus and 
start with this plan. Please comment if you have objections.


Again: it allows an immediate fallback at any time, as the svn repo will be 
kept as is (plus I will keep it up to date for the next 2 months)


Thanks,
Marton


On 06/21/2018 09:00 PM, Elek, Marton wrote:


Thank you very much for bumping this thread.


About [2]: (Just for clarification) the content of the proposed 
website is exactly the same as the old one.


About [1]. I believe that "mvn site" is perfect for the 
documentation, but for website creation there are simpler and more 
powerful tools.


Hugo is simpler than Jekyll: just one binary, with no 
dependencies, and it works everywhere (Mac, Linux, Windows)


Hugo is much more powerful than "mvn site": it is easier to 
create and use a modern layout/theme, and easier to handle the content 
(for example, new release announcements could be generated as part of 
the release process)


I think it's very low risk to try out a new approach for the site (and 
easy to roll back in case of problems)


Marton

ps: I just updated the patch/preview site with the recent releases:

***
* http://hadoop.anzix.net *
***

On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:

Got pinged about this offline.

Thanks for keeping at it, Marton!

I think there are two road-blocks here:
  (1) Is the mechanism by which the website is built good enough - 
mvn site / hugo etc?

  (2) Is the new website good enough?

For (1), I just think we need more committer attention and get 
feedback rapidly and get it in.


For (2), how about we do it in a different way in the interest of 
progress?

  - We create a hadoop.apache.org/new-site/ where this new site goes.
  - We then modify the existing website to say that there is a new 
site/experience, with a link folks can click to navigate to it
  - As this new website matures and gets feedback & fixes, we finally 
pull the plug at a later point in time, when we think we are good to go.


Thoughts?

+Vinod


On Feb 16, 2018, at 3:10 AM, Elek, Marton  wrote:

Hi,

I would like to bump this thread up.

TLDR; There is a proposed version of a new hadoop site which is 
available from here: https://elek.github.io/hadoop-site-proposal/ 
and https://issues.apache.org/jira/browse/HADOOP-14163


Please let me know what you think about it.


Longer version:

This thread started a long time ago with the aim of a more modern Hadoop site.

The goals were:

1. To make the site easier to manage (the release entries could be 
created by a script as part of the release process)

2. To use a better look-and-feel
3. Move it out from svn to git

I proposed to:

1. Move the existing site to git and generate it with hugo (which is 
a single, standalone binary)

2. Move both the rendered and source branches to git.
3. (Create a jenkins job to generate the site automatically)

NOTE: this is just about the Forrest-based hadoop.apache.org, NOT about 
the documentation, which is generated by mvn-site (as before)



I got a lot of valuable feedback and improved the proposed site 
according to the comments. Allen had some concerns about the 
technologies used (hugo vs. mvn-site), and I answered all the questions 
about why I think mvn-site is best for the documentation and hugo is best 
for generating the site.



I would like to finish this effort/jira, so I would like to start a 
discussion about adopting this proposed version and approach as the new 
site of Apache Hadoop. Please let me know what you think.



Thanks a lot,
Marton

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-15729) [s3a] stop treating fs.s3a.max.threads as the long-term minimum

2018-09-07 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15729:
--

 Summary: [s3a] stop treating fs.s3a.max.threads as the long-term 
minimum
 Key: HADOOP-15729
 URL: https://issues.apache.org/jira/browse/HADOOP-15729
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


A while ago the s3a connector started experiencing deadlocks because the AWS 
SDK requires an unbounded threadpool. It places monitoring tasks on the work 
queue before the tasks they wait on, so it's possible (has even happened with 
larger-than-default threadpools) for the executor to become permanently 
saturated and deadlock.

So we started giving an unbounded threadpool executor to the SDK, and using a 
bounded, blocking threadpool service for everything else S3A needs (although 
currently that's only in the S3ABlockOutputStream). fs.s3a.max.threads then 
only limits this threadpool; however, we also specified fs.s3a.max.threads as 
the number of core threads in the unbounded threadpool, which in hindsight is 
pretty terrible.

Currently those core threads do not time out, so this is effectively a 
minimum: once that many tasks have been submitted, the threadpool may burst 
beyond that number, but it will only ever spin back down that far. If 
fs.s3a.max.threads is set reasonably high and someone uses a bunch of S3 
buckets, they could easily have thousands of idle threads constantly.

We should either stop using fs.s3a.max.threads for the core pool size and 
introduce a new configuration, or simply allow core threads to time out. I'm 
reading the OpenJDK source now to see what subtle differences remain between 
core threads and other threads when core threads can time out.
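
For illustration, a minimal sketch of the second option (hypothetical values 
and class name, not the actual S3A wiring): java.util.concurrent.ThreadPoolExecutor 
applies its keep-alive to idle core threads once allowCoreThreadTimeOut(true) 
is set, so the unbounded pool handed to the SDK would no longer pin a 
permanent floor of threads.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch, not the actual S3A wiring.
public class CoreTimeoutSketch {
  public static void main(String[] args) {
    int maxThreads = 64; // stand-in for fs.s3a.max.threads
    ThreadPoolExecutor sdkPool = new ThreadPoolExecutor(
        maxThreads,                // core size, today pinned as a minimum
        Integer.MAX_VALUE,         // effectively unbounded, as the SDK needs
        60L, TimeUnit.SECONDS,     // keep-alive for idle threads
        new SynchronousQueue<>()); // direct hand-off, no bounded queue
    // The key line: idle core threads now also exit after the keep-alive,
    // so the pool can spin down below maxThreads when it goes quiet.
    sdkPool.allowCoreThreadTimeOut(true);
    sdkPool.shutdown();
  }
}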



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org