[jira] [Resolved] (HADOOP-16648) HDFS Native Client does not build correctly

2019-10-10 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan resolved HADOOP-16648.
---
Resolution: Duplicate

Marking this as a duplicate of HDFS-14900.

> HDFS Native Client does not build correctly
> ---
>
> Key: HADOOP-16648
> URL: https://issues.apache.org/jira/browse/HADOOP-16648
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Priority: Blocker
>
> Builds are failing in PRs with the following exception in the native client.
> {noformat}
> [WARNING] make[2]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
>   2 3 4 5 6 7 8 9 10 11
> [WARNING] [ 28%] Built target common_obj
> [WARNING] make[2]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
>   31
> [WARNING] [ 28%] Built target gmock_main_obj
> [WARNING] make[1]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2
> [INFO] 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [  0.301 s]
> [INFO] Apache Hadoop Build Tools .. SUCCESS [  1.348 s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.501 s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  1.391 s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  0.115 s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.168 s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  4.490 s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.773 s]
> [INFO] Apache Hadoop Auth . SUCCESS [  7.922 s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  1.381 s]
> [INFO] Apache Hadoop Common ... SUCCESS [ 34.562 s]
> [INFO] Apache Hadoop NFS .. SUCCESS [  5.583 s]
> [INFO] Apache Hadoop KMS .. SUCCESS [  5.931 s]
> [INFO] Apache Hadoop Registry . SUCCESS [  5.816 s]
> [INFO] Apache Hadoop Common Project ... SUCCESS [  0.056 s]
> [INFO] Apache Hadoop HDFS Client .. SUCCESS [ 27.104 s]
> [INFO] Apache Hadoop HDFS . SUCCESS [ 42.065 s]
> [INFO] Apache Hadoop HDFS Native Client ... FAILURE [ 19.349 s]
> {noformat}
> Creating this ticket, as a couple of pull requests had the same issue, e.g.:
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/1/artifact/out/patch-compile-root.txt
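
The bogus target path ending in `PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND` is CMake's sentinel for a `find_program()` lookup that failed at configure time: the unresolved cache variable (here, the protoc executable) leaks straight into the generated Makefile. A minimal shell sketch of how to recognize the pattern, plus a hedged workaround (the exact cache variable name depends on the CMake/FindProtobuf versions in use, and is taken from the log above rather than verified against this build):

```shell
# CMake stores a failed find_program()/find_package() lookup as the literal
# string "<VAR>-NOTFOUND"; any build rule that expands that variable then
# depends on a nonexistent file, which is exactly the
# "No rule to make target ...-NOTFOUND" error seen in the log.
protoc_path="PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND"   # value seen in the log

case "$protoc_path" in
  *-NOTFOUND) echo "protoc was not located at configure time" ;;
  *)          echo "protoc resolved to: $protoc_path" ;;
esac

# A typical remedy is to point cmake at protoc explicitly, e.g.:
#   cmake .. -DPROTOBUF_PROTOC_EXECUTABLE="$(command -v protoc)"
# (hedged: the variable name above must match what the build's
# FindProtobuf module expects)
```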



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/471/

[Oct 10, 2019 4:07:18 PM] (xkrogen) HDFS-14162. [SBN read] Allow Balancer to 
work with Observer node. Add a
[Oct 10, 2019 4:09:50 PM] (ekrogen) HDFS-14245. [SBN read] Enable 
ObserverReadProxyProvider to work with
[Oct 10, 2019 8:29:30 PM] (cliang) HDFS-14509. DN throws InvalidToken due to 
inequality of password when


Re: [DISCUSS] Hadoop 2.10.0 release plan

2019-10-10 Thread Jonathan Hung
Hi folks, as of now all 2.10.0 blockers have been resolved [1]. So I'll
start the release process soon (cutting branches, updating target versions,
etc).

[1] https://issues.apache.org/jira/issues/?filter=12346975

Jonathan Hung


On Mon, Aug 26, 2019 at 10:19 AM Jonathan Hung  wrote:

> Hi folks,
>
> As discussed previously (e.g. [1], [2]) we'd like to do a 2.10.0 release
> soon. Some features/big-items we're targeting for this release:
>
>    - YARN resource types/GPU support (YARN-8200)
>    - Selective wire encryption (HDFS-13541)
>    - Rolling upgrade support from 2.x to 3.x (e.g. HDFS-14509)
>
> Per [3], it sounds like there's concern around upgrading dependencies as well.
>
> We created a public jira filter here (
> https://issues.apache.org/jira/issues/?filter=12346975) marking all
> blockers for 2.10.0 release. If you have other jiras that should be 2.10.0
> blockers, please mark "Target Version/s" as "2.10.0" and add label
> "release-blocker" so we can track it through this filter.
>
> We're targeting a release at the end of September.
>
> Please share any thoughts you have about this. Thanks!
>
> [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29461.html
> [2]
> https://www.mail-archive.com/mapreduce-dev@hadoop.apache.org/msg21293.html
> [3] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg33440.html
>
>
> Jonathan Hung
>


[jira] [Resolved] (HADOOP-16650) ITestS3AClosedFS failing -junit test thread

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16650.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> ITestS3AClosedFS failing -junit test thread
> ---
>
> Key: HADOOP-16650
> URL: https://issues.apache.org/jira/browse/HADOOP-16650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
>
> The new thread leak test in HADOOP-16570 is failing for me in test runs; need 
> to strip out all Junit-* threads for the filter to be reliable
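
The fix direction described above (ignore JUnit's own worker threads when checking for leaked threads) can be sketched as a simple name filter; the thread names below are invented for illustration, not taken from the actual test run:

```shell
# Drop every thread whose name starts with "junit"/"Junit" (case-insensitive)
# before looking for leaks, so the filter only flags threads the filesystem
# itself may have left behind.
leaked=$(printf '%s\n' \
  'junit-timeout-thread' \
  'JUnit-Worker-1' \
  's3a-transfer-shared-pool1-t1' \
  | grep -iv '^junit')

echo "$leaked"
```

Only the (hypothetical) `s3a-transfer-shared-pool1-t1` thread survives the filter.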






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1285/

[Oct 9, 2019 5:58:47 AM] (shashikant) HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place
[Oct 9, 2019 10:23:14 AM] (sunilg) YARN-9873. Mutation API Config Change need 
to update Version Number.
[Oct 9, 2019 11:09:09 AM] (snemeth) YARN-9356. Add more tests to ratio method 
in TestResourceCalculator.
[Oct 9, 2019 11:26:26 AM] (snemeth) YARN-9128. Use SerializationUtils from 
apache commons to serialize /
[Oct 9, 2019 1:46:16 PM] (elek) HDDS-2217. Remove log4j and audit configuration 
from the docker-config
[Oct 9, 2019 1:51:00 PM] (elek) HDDS-2217. Remove log4j and audit configuration 
from the docker-config
[Oct 9, 2019 2:16:44 PM] (elek) Squashed commit of the following:
[Oct 9, 2019 2:17:40 PM] (elek) HDDS-2265. integration.sh may report false 
negative
[Oct 9, 2019 5:50:28 PM] (surendralilhore) HDFS-14754. Erasure Coding : The 
number of Under-Replicated Blocks never




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

FindBugs :

   module:hadoop-ozone/csi 
   Useless control flow in 
csi.v1.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java: 
At Csi.java:[line 15977] 
   Class csi.v1.Csi$ControllerExpandVolumeRequest defines non-transient 
non-serializable instance field secrets_ In Csi.java:instance field secrets_ In 
Csi.java 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 50408] 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: A

[jira] [Created] (HADOOP-16650) ITestS3AClosedFS failing -junit test thread

2019-10-10 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16650:
---

 Summary: ITestS3AClosedFS failing -junit test thread
 Key: HADOOP-16650
 URL: https://issues.apache.org/jira/browse/HADOOP-16650
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The new thread leak test in HADOOP-16570 is failing for me in test runs; need 
to strip out all Junit-* threads for the filter to be reliable






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/

[Oct 9, 2019 11:23:25 PM] (dazhou) HADOOP-16578 : Avoid FileSystem API calls 
when FileSystem already exists
[Oct 9, 2019 11:50:06 PM] (dazhou) HADOOP-16630 : Backport of Hadoop-16548 : 
Disable Flush() over config




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-h

[GitHub] [hadoop-thirdparty] ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-10-10 Thread GitBox
ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r333463878
 
 

 ##
 File path: hadoop-shaded-protobuf37/pom.xml
 ##
 @@ -0,0 +1,110 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hadoop-thirdparty</artifactId>
+    <groupId>org.apache.hadoop.thirdparty</groupId>
+    <version>1.0.0-SNAPSHOT</version>
+    <relativePath>..</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>hadoop-shaded-protobuf37</artifactId>
 
 Review comment:
   Well, I am OK with 37, but 37 is itself a whole number and doesn't directly 
indicate 3.7; maybe we can have something like 3_7?
   If that sounds better to you, great; otherwise I am still OK with this as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop-thirdparty] ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-10-10 Thread GitBox
ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r333461646
 
 

 ##
 File path: NOTICE-binary
 ##
 @@ -0,0 +1,840 @@
+Apache Hadoop
+Copyright 2006 and onwards The Apache Software Foundation.
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+Export Control Notice
+-
+
+This distribution includes cryptographic software.  The country in
+which you currently reside may have restrictions on the import,
+possession, use, and/or re-export to another country, of
+encryption software.  BEFORE using any encryption software, please
+check your country's laws, regulations and policies concerning the
+import, possession, or use, and re-export of encryption software, to
+see if this is permitted.  See  for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and
+Security (BIS), has classified this software as Export Commodity
+Control Number (ECCN) 5D002.C.1, which includes information security
+software using or performing cryptographic functions with asymmetric
+algorithms.  The form and manner of this Apache Software Foundation
+distribution makes it eligible for export under the License Exception
+ENC Technology Software Unrestricted (TSU) exception (see the BIS
+Export Administration Regulations, Section 740.13) for both object
+code and source code.
+
+The following provides more details on the included cryptographic software:
+
+This software uses the SSL libraries from the Jetty project written
+by mortbay.org.
+Hadoop Yarn Server Web Proxy uses the BouncyCastle Java
+cryptography APIs written by the Legion of the Bouncy Castle Inc.
+
+// --
+// NOTICE file corresponding to the section 4d of The Apache License,
+// Version 2.0, in this case for
+// --
+
+
+Apache Yetus
+Copyright 2008-2017 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+---
+Additional licenses for the Apache Yetus Source/Website:
+---
+
+
+See LICENSE for terms.
+
+
+
+Apache Avro
+Copyright 2010 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+C JSON parsing provided by Jansson and
+written by Petri Lehtinen. The original software is
+available from http://www.digip.org/jansson/.
+
+
+AWS SDK for Java
+Copyright 2010-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+
+This product includes software developed by
+Amazon Technologies, Inc (http://www.amazon.com/).
+
+**
+THIRD PARTY COMPONENTS
+**
+This software includes third party software subject to the following 
copyrights:
+- XML parsing and utility functions from JetS3t - Copyright 2006-2009 James 
Murty.
+- PKCS#1 PEM encoded private key parsing and utility functions from 
oauth.googlecode.com - Copyright 1998-2010 AOL Inc.
+
+The licenses for these third party components are included in LICENSE.txt
+
+
+Apache Commons BeanUtils
+Copyright 2000-2016 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+Apache Commons CLI
+Copyright 2001-2009 The Apache Software Foundation
+
+This product includes software developed by
+The Apache Software Foundation (http://www.apache.org/).
+
+
+Apache Commons Codec
+Copyright 2002-2017 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
+contains test data from http://aspell.net/test/orig/batch0.tab.
+Copyright (C) 2002 Kevin Atkinson (kev...@gnu.org)
+
+===
+
+The content of package org.apache.commons.codec.language.bm has been translated
+from the original php source code available at 
http://stevemorse.org/phoneticinfo.htm
+with permission from the original authors.
+Original source copyright:
+Copyright (c) 2008 Alexander Beider & Stephen P. Morse.
+
+
+Apache Commons Collections
+Copyright 2001-2018 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+Apache Commons Compress
+Copyright 2002-2018 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (https://www.apache.org/).
+
+The files in the package org.apache.commons.compress.archivers.sevenz
+were derived from the LZMA SDK, version 9.20 (C/ and CPP/7zip/),
+which has been placed 

[GitHub] [hadoop-thirdparty] ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-10-10 Thread GitBox
ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r333460910
 
 

 ##
 File path: .gitignore
 ##
 @@ -0,0 +1,6 @@
+.idea
+**/target/*
+*.patch
+*.iml
+patchprocess
+**/dependency-reduced-pom.xml
 
 Review comment:
   can we add .project, .classpath and .settings?





[GitHub] [hadoop-thirdparty] ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-10-10 Thread GitBox
ayushtkn commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r333461008
 
 

 ##
 File path: LICENSE.txt
 ##
 @@ -0,0 +1,224 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is prvided in the Appendix below).
 
 Review comment:
   typo *provided





[jira] [Created] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Tom Lous (Jira)
Tom Lous created HADOOP-16649:
-

 Summary: Defining hadoop-azure and hadoop-azure-datalake in 
HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure
 Key: HADOOP-16649
 URL: https://issues.apache.org/jira/browse/HADOOP-16649
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, fs
Affects Versions: 3.2.1
 Environment: Shell, but it also trickles down into all code using 
`FileSystem` 
Reporter: Tom Lous


When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get ignored.

e.g. setting this:

HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"

 

 with debug on:

 

DEBUG: Profiles: importing 
/opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure

 

whereas:

 

HADOOP_OPTIONAL_TOOLS="hadoop-azure"

 

 with debug on:


DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
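
One plausible mechanism for the "declined" line (an assumption on my part, not confirmed against the shell profile code) is a non-delimited substring membership test: once `hadoop-azure-datalake` is registered in HADOOP_SHELL_PROFILES, the string `hadoop-azure` appears to already be present. A minimal reproduction of that failure mode, with a delimiter-aware variant that avoids it:

```shell
# Naive substring check: wrongly treats "hadoop-azure" as a duplicate
# because it is a substring of "hadoop-azure-datalake".
profiles="hadoop-azure-datalake"
candidate="hadoop-azure"

case "$profiles" in
  *"$candidate"*) naive="declined" ;;
  *)              naive="accepted" ;;
esac

# Delimiter-aware check: pad both sides with spaces so only whole
# entries match, removing the false positive.
case " $profiles " in
  *" $candidate "*) delimited="declined" ;;
  *)                delimited="accepted" ;;
esac

echo "naive: $naive, delimiter-aware: $delimited"
```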

 






Re: [VOTE] Release Apache Hadoop Ozone 0.4.1-alpha

2019-10-10 Thread Elek, Marton



+1

Thank you, Nanda, for the enormous work to make this release happen.



 * GPG Signatures are fine
 * SHA512 signatures are fine
 * Can be built from the source package (in isolated environment 
without cached hadoop/ozone artifacts)

 * Started the pseudo cluster with `compose/ozone`
 * Executed the FULL smoke-test suite (`cd compose && ./test-all.sh`); ALL 
passed except some intermittent issues:
   * the kinit step failed due to a timeout, but after that all the secure 
tests passed. I think my laptop was too slow, plus I had other CPU-sensitive 
tasks in the meantime

 * Tested to create apache/hadoop-ozone:0.4.1 image
 * Using hadoop-docker-ozone/Dockerfile [1]
 * Started single, one node cluster + tested with AWS cli 
(REDUCED_REDUNDANCY) (`docker run elek/ozone:test`)
 * Started pseudo cluster (`docker run elek/ozone:test cat 
docker-compose.yaml && docker run elek/ozone:test cat docker-config`)

 * Tested with kubernetes:
   * Used the image which is created earlier
   * Replaced the images under kubernetes/examples/minikube
   * Started with kubectl `kubectl apply -f` to k3s (3!) cluster
   * Tested with `ozone sh` commands (put/get keys)


Marton

[1]:
```
docker build --build-arg 
OZONE_URL=https://home.apache.org/~nanda/ozone/release/0.4.1/RC0/hadoop-ozone-0.4.1-alpha.tar.gz 
-t elek/ozone-test .

```

On 10/4/19 7:42 PM, Nanda kumar wrote:

Hi Folks,

I have put together RC0 for Apache Hadoop Ozone 0.4.1-alpha.

The artifacts are at:
https://home.apache.org/~nanda/ozone/release/0.4.1/RC0/

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1238/

The RC tag in git is at:
https://github.com/apache/hadoop/tree/ozone-0.4.1-alpha-RC0

And the public key used for signing the artifacts can be found at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

This release contains 363 fixes/improvements [1].
Thanks to everyone who put in the effort to make this happen.

*The vote will run for 7 days, ending on October 11th at 11:59 pm IST.*
Note: This release is alpha quality; it’s not recommended for use in
production, but we believe it’s stable enough to try out the feature
set and collect feedback.


[1] https://s.apache.org/yfudc

Thanks,
Team Ozone






[jira] [Created] (HADOOP-16648) HDFS Native Client does not build correctly

2019-10-10 Thread Rajesh Balamohan (Jira)
Rajesh Balamohan created HADOOP-16648:
-

 Summary: HDFS Native Client does not build correctly
 Key: HADOOP-16648
 URL: https://issues.apache.org/jira/browse/HADOOP-16648
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Rajesh Balamohan


Builds are failing in PRs with the following exception in the native client.

{noformat}
[WARNING] make[2]: Leaving directory 
'/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
[WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
  2 3 4 5 6 7 8 9 10 11
[WARNING] [ 28%] Built target common_obj
[WARNING] make[2]: Leaving directory 
'/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
[WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
  31
[WARNING] [ 28%] Built target gmock_main_obj
[WARNING] make[1]: Leaving directory 
'/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
[WARNING] Makefile:127: recipe for target 'all' failed
[WARNING] make[2]: *** No rule to make target 
'/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
 needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'.  
Stop.
[WARNING] make[1]: *** 
[main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
[WARNING] make[1]: *** Waiting for unfinished jobs
[WARNING] make: *** [all] Error 2
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main . SUCCESS [  0.301 s]
[INFO] Apache Hadoop Build Tools .. SUCCESS [  1.348 s]
[INFO] Apache Hadoop Project POM .. SUCCESS [  0.501 s]
[INFO] Apache Hadoop Annotations .. SUCCESS [  1.391 s]
[INFO] Apache Hadoop Project Dist POM . SUCCESS [  0.115 s]
[INFO] Apache Hadoop Assemblies ... SUCCESS [  0.168 s]
[INFO] Apache Hadoop Maven Plugins  SUCCESS [  4.490 s]
[INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.773 s]
[INFO] Apache Hadoop Auth . SUCCESS [  7.922 s]
[INFO] Apache Hadoop Auth Examples  SUCCESS [  1.381 s]
[INFO] Apache Hadoop Common ... SUCCESS [ 34.562 s]
[INFO] Apache Hadoop NFS .. SUCCESS [  5.583 s]
[INFO] Apache Hadoop KMS .. SUCCESS [  5.931 s]
[INFO] Apache Hadoop Registry . SUCCESS [  5.816 s]
[INFO] Apache Hadoop Common Project ... SUCCESS [  0.056 s]
[INFO] Apache Hadoop HDFS Client .. SUCCESS [ 27.104 s]
[INFO] Apache Hadoop HDFS . SUCCESS [ 42.065 s]
[INFO] Apache Hadoop HDFS Native Client ... FAILURE [ 19.349 s]
{noformat}

Creating this ticket, as a couple of pull requests had the same issue, e.g.:
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/1/artifact/out/patch-compile-root.txt


