[jira] [Created] (HDFS-15137) Move RDBStore logic from apache-ozone into hadoop-commons module of apache-hadoop

2020-01-21 Thread maobaolong (Jira)
maobaolong created HDFS-15137:
-

 Summary: Move RDBStore logic from apache-ozone into hadoop-commons module of apache-hadoop
 Key: HDFS-15137
 URL: https://issues.apache.org/jira/browse/HDFS-15137
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: maobaolong

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Hadoop 3.1.4 Release Plan Proposal

2020-01-21 Thread Wei-Chiu Chuang
Sounds good to me. Thanks for doing this!

On Mon, Jan 20, 2020 at 4:04 AM Gabor Bota  wrote:

> Hi All,
>
> Based on the discussion on the topic "Hadoop 2019 Release Planning" I
> volunteer to do the next 3.1 Hadoop release, version 3.1.4.
>
> You can find the blocker/critical issues and all issues targeted for 3.1.4
> under the following cwiki page:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.1+Release
>
> Right now there are 217 total fixed issues[1] and 6 blocker/critical
> issues targeted[2] for the release. If you have any other commits you
> want to backport to the 3.1 branch before the feature or code freeze,
> please add 3.1.4 as the target release in Jira.
>
>
> My proposed timeline for the release is the following:
> * Feature Freeze Date: Wednesday, 11 March 2020
> * Code Freeze Date: Wednesday, 22 April 2020
> * Release Date: Wednesday, 29 April 2020
>
>
> Please let me know if you have any suggestions.
>
> Regards,
> Gabor Bota
>
>
>
>
> References:
> [1] project in (HADOOP, MAPREDUCE, HDFS, YARN) AND fixVersion = 3.1.4
> ORDER BY priority DESC, updated DESC
> [2] project in (HADOOP, MAPREDUCE, HDFS, YARN) AND priority in (Blocker,
> Critical) AND "Target Version/s" = 3.1.4 ORDER BY priority DESC, updated
> DESC
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1388/

[Jan 21, 2020 1:58:32 AM] (aajisaka) HADOOP-16753. Refactor HAAdmin. Contributed by Xieming Li.
[Jan 21, 2020 9:03:24 AM] (aajisaka) HADOOP-16808. Use forkCount and reuseForks parameters instead of
[Jan 21, 2020 9:22:44 AM] (iwasakims) Remove WARN log when ipc connection interrupted in
[Jan 21, 2020 4:37:51 PM] (stevel) HADOOP-16346. Stabilize S3A OpenSSL support.
[Jan 21, 2020 9:22:53 PM] (inigoiri) HDFS-15126. TestDatanodeRegistration#testForcedRegistration fails
[Jan 21, 2020 9:29:20 PM] (inigoiri) HDFS-15092. TestRedudantBlocks#testProcessOverReplicatedAndRedudantBlock
[Jan 21, 2020 9:41:01 PM] (inigoiri) YARN-9768. RM Renew Delegation token thread should timeout and retry.
[Jan 21, 2020 10:31:51 PM] (liuml07) HADOOP-16759. Filesystem openFile() builder to take a FileStatus param
[Jan 22, 2020 1:45:17 AM] (inigoiri) Revert "YARN-9768. RM Renew Delegation token thread should timeout and


[jira] [Created] (HDFS-15135) EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.

2020-01-21 Thread Surendra Singh Lilhore (Jira)
Surendra Singh Lilhore created HDFS-15135:
-

 Summary: EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.
 Key: HDFS-15135
 URL: https://issues.apache.org/jira/browse/HDFS-15135
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Surendra Singh Lilhore


{noformat}
java.lang.ArrayIndexOutOfBoundsException: 8
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskStriped.recover(BlockRecoveryWorker.java:464)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:602)
    at java.lang.Thread.run(Thread.java:745)
{noformat}






[jira] [Created] (HDFS-15136) In secure mode when Cookies are not set in request header leads to exception flood in DEBUG log

2020-01-21 Thread Renukaprasad C (Jira)
Renukaprasad C created HDFS-15136:
-

 Summary: In secure mode when Cookies are not set in request header leads to exception flood in DEBUG log
 Key: HDFS-15136
 URL: https://issues.apache.org/jira/browse/HDFS-15136
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Renukaprasad C
Assignee: Renukaprasad C


In debug mode, the exception below gets logged when no Cookie is set in the request header. This exception stack gets repeated and has no meaning here.

Instead, log the error in debug mode and continue, without the throw/catch/log of the exception.
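
A minimal sketch of the proposed behaviour (illustrative only: the parseCookieHeader helper below is hypothetical, and the real change would go where AuthenticatedURL$AuthCookieHandler.put() calls HttpCookie.parse()):

{code:java}
import java.net.HttpCookie;
import java.util.Collections;
import java.util.List;

/**
 * Illustrative sketch only, not the actual patch: skip parsing when the
 * cookie header is blank instead of letting HttpCookie.parse() throw
 * IllegalArgumentException("Empty cookie header string").
 */
public class EmptyCookieGuard {

  static List<HttpCookie> parseCookieHeader(String header) {
    if (header == null || header.trim().isEmpty()) {
      // Log at DEBUG and continue; nothing is thrown, caught, or dumped.
      System.out.println("DEBUG: ignoring empty cookie header");
      return Collections.emptyList();
    }
    return HttpCookie.parse(header);
  }

  public static void main(String[] args) {
    System.out.println(parseCookieHeader(""));                      // []
    System.out.println(parseCookieHeader("hadoop.auth=\"t0ken\"")); // parsed
  }
}
{code}

With such a guard, an empty header simply yields no cookies, and the stack trace below never gets printed.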

2020-01-20 18:25:57,792 DEBUG org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:test/t...@hadoop.com (auth:KERBEROS) from:org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:518)
2020-01-20 18:25:57,792 DEBUG org.apache.hadoop.hdfs.web.URLConnectionFactory: open AuthenticatedURL connection https://IP:PORT/getJournal?jid=hacluster&segmentTxId=295&storageInfo=-64%3A39449123%3A1579244618105%3Amyhacluster&inProgressOk=true
2020-01-20 18:25:57,803 DEBUG org.apache.hadoop.security.authentication.client.KerberosAuthenticator: JDK performed authentication on our behalf.
2020-01-20 18:25:57,803 DEBUG org.apache.hadoop.security.authentication.client.AuthenticatedURL: Cannot parse cookie header:
java.lang.IllegalArgumentException: Empty cookie header string
    at java.net.HttpCookie.parseInternal(HttpCookie.java:826)
    at java.net.HttpCookie.parse(HttpCookie.java:202)
    at java.net.HttpCookie.parse(HttpCookie.java:178)
    at org.apache.hadoop.security.authentication.client.AuthenticatedURL$AuthCookieHandler.put(AuthenticatedURL.java:99)
    at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:390)
    at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:197)
    at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
    at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:186)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:470)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:518)
    at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:512)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:463)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:157)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:208)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:266)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:198)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:198)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:253)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:925)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:773)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1119)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:732)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:638)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:700)

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]
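
For context, a minimal sketch of the unbox/rebox round trip this warning describes (shapes assumed for illustration; this is not the actual ColumnRWHelper code):

{code:java}
import java.util.NavigableMap;
import java.util.TreeMap;

/** Illustrative only: the boxed-value round trip FindBugs flags. */
public class ReboxSketch {
  public static void main(String[] args) {
    NavigableMap<Long, String> results = new TreeMap<>();
    Object raw = 42L; // a boxed Long read back as a plain Object

    // Flagged shape: the cast to long unboxes the value, and put()
    // immediately autoboxes it back to Long for the map key.
    results.put((long) (Long) raw, "flagged");

    // Preferred shape: cast to the boxed type and keep it boxed.
    results.put((Long) raw, "clean");

    System.out.println(results);
  }
}
{code}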

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/573/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  

Call for presentations for ApacheCon North America 2020 now open

2020-01-21 Thread Rich Bowen

Dear Apache enthusiast,

(You’re receiving this message because you are subscribed to one or more 
project mailing lists at the Apache Software Foundation.)


The call for presentations for ApacheCon North America 2020 is now open 
at https://apachecon.com/acna2020/cfp


ApacheCon will be held at the Sheraton, New Orleans, September 28th 
through October 2nd, 2020.


As in past years, ApacheCon will feature tracks focusing on the various 
technologies within the Apache ecosystem, and so the call for 
presentations will ask you to select one of those tracks, or “General” 
if the content falls outside of one of our already-organized tracks. 
These tracks are:


Karaf
Internet of Things
Fineract
Community
Content Delivery
Solr/Lucene (Search)
Gobblin/Big Data Integration
Ignite
Observability
Cloudstack
Geospatial
Graph
Camel/Integration
Flagon
Tomcat
Cassandra
Groovy
Web/httpd
General/Other

The CFP will close Friday, May 1, 2020 8:00 AM (America/New_York time).

Submit early, submit often, at https://apachecon.com/acna2020/cfp

Rich, for the ApacheCon Planners




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1387/

[Jan 20, 2020 8:28:23 AM] (snemeth) YARN-8148. Update decimal values for queue capacities shown on queue
[Jan 20, 2020 8:41:06 AM] (snemeth) YARN-10081. Exception message from ClientRMProxy#getRMAddress is
[Jan 20, 2020 8:54:22 AM] (snemeth) YARN-9462. TestResourceTrackerService.testNodeRemovalGracefully fails
[Jan 20, 2020 11:36:55 AM] (snemeth) YARN-9525. IFile format is not working against s3a remote folder.
[Jan 20, 2020 12:10:32 PM] (snemeth) YARN-7913. Improve error handling when application recovery fails with
[Jan 20, 2020 4:23:41 PM] (stevel) HADOOP-16785. followup to abfs close() fix.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
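
For reference, a minimal sketch of the null-safe, type-checked equals() pattern the two WorkerId warnings call for (the hostname field is assumed for illustration; this is not the actual MaWo code):

{code:java}
/** Illustrative sketch: instanceof covers both null and wrong-type arguments. */
public final class WorkerIdSketch {
  private final String hostname; // assumed field, for illustration only

  public WorkerIdSketch(String hostname) {
    this.hostname = hostname;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof WorkerIdSketch)) { // also false when obj is null
      return false;
    }
    return hostname.equals(((WorkerIdSketch) obj).hostname);
  }

  @Override
  public int hashCode() { // keep equals and hashCode consistent
    return hostname.hashCode();
  }
}
{code}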

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos
   Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
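
The encoding and clean-up warnings above come down to two textbook fixes, sketched here under assumed signatures (the real hadoop-cos methods differ): pass an explicit charset rather than relying on the platform default, and close the stream with try-with-resources.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

/** Illustrative sketch of the default-encoding and stream clean-up fixes. */
public class CosWarningFixes {

  static String toStringExplicit(byte[] bytes) {
    // Explicit charset instead of new String(byte[]), which silently
    // uses the platform default encoding.
    return new String(bytes, StandardCharsets.UTF_8);
  }

  static void uploadPart(byte[] chunk) throws IOException {
    // try-with-resources closes the stream on every path, discharging
    // the clean-up obligation FindBugs reports.
    try (InputStream in = new ByteArrayInputStream(chunk)) {
      in.read(); // stand-in for handing the stream to the COS client
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(toStringExplicit(new byte[] {104, 105})); // "hi"
    uploadPart(new byte[] {1, 2, 3});
  }
}
{code}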

Failed junit tests :

   hadoop.hdfs.server.namenode.TestRedudantBlocks
   hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown