[jira] [Created] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13540:


 Summary: DFSStripedInputStream should not allocate new buffers 
during close / unbuffer
 Key: HDFS-13540
 URL: https://issues.apache.org/jira/browse/HDFS-13540
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen


This was found in the same scenario where HDFS-13539 is caught.

There are two OOMs that look interesting:
{noformat}
FSDataInputStream#close error:
OutOfMemoryError: Direct buffer memory
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
at org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
{noformat}
and
{noformat}
org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
OutOfMemoryError: Direct buffer memory
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
at org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
{noformat}

As the stack traces show, {{resetCurStripeBuffer}} gets a buffer from the 
buffer pool. We could avoid that cost when the call is just a close or 
unbuffer.
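The idea above can be sketched as follows. This is a toy model, not the actual Hadoop code: {{SimpleBufferPool}}, {{StripeBufferSketch}}, and the {{shouldAllocate}} flag are illustrative stand-ins showing how the close/unbuffer path could release state without borrowing a fresh buffer from the pool.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public class StripeBufferSketch {
    // Toy stand-in for ElasticByteBufferPool: counts real allocations.
    static class SimpleBufferPool {
        private final Deque<ByteBuffer> free = new ArrayDeque<>();
        int allocations = 0;
        ByteBuffer getBuffer(int size) {
            ByteBuffer b = free.poll();
            if (b == null) {
                allocations++;
                b = ByteBuffer.allocate(size);
            }
            return b;
        }
        void putBuffer(ByteBuffer b) {
            b.clear();
            free.push(b);
        }
    }

    final SimpleBufferPool pool = new SimpleBufferPool();
    ByteBuffer curStripeBuf;

    // shouldAllocate is false on the close/unbuffer path, so resetting
    // never requests a new buffer just to tear down block readers.
    void resetCurStripeBuffer(boolean shouldAllocate) {
        if (curStripeBuf == null && shouldAllocate) {
            curStripeBuf = pool.getBuffer(64 * 1024);
        }
        if (curStripeBuf != null) {
            curStripeBuf.clear();
            if (!shouldAllocate) {
                // close/unbuffer: hand the buffer back instead of keeping it
                pool.putBuffer(curStripeBuf);
                curStripeBuf = null;
            }
        }
    }

    public static void main(String[] args) {
        StripeBufferSketch s = new StripeBufferSketch();
        s.resetCurStripeBuffer(false);  // close with no buffer held
        System.out.println("allocations after close: " + s.pool.allocations);
        s.resetCurStripeBuffer(true);   // a read path allocates lazily
        System.out.println("allocations after read: " + s.pool.allocations);
    }
}
```

With the flag, a stream that is opened and immediately closed never touches direct memory, which is exactly the waste the stack traces above point at.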



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13539:


 Summary: DFSInputStream NPE when reportCheckSumFailure
 Key: HDFS-13539
 URL: https://issues.apache.org/jira/browse/HDFS-13539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen
 Attachments: HDFS-13539.01.patch

We have seen the following exception with DFSStripedInputStream.
{noformat}
readDirect: FSDataInputStream#read error:
NullPointerException: java.lang.NullPointerException
at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
{noformat}
Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the only 
possible null object.

Original exception is masked by the NPE.
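A minimal sketch of the fix direction: guard the reporting call so a null {{currentLocatedBlock}} no longer turns the real read failure into an NPE. All names here ({{ChecksumReportGuard}}, {{readWithGuard}}, {{LocatedBlockStub}}) are illustrative, not the actual Hadoop internals.

```java
public class ChecksumReportGuard {
    // Tiny stand-in for a located block.
    static class LocatedBlockStub {
        final String id;
        LocatedBlockStub(String id) { this.id = id; }
    }

    static String lastReport = null;

    // Dereferences the block; would NPE if called with null.
    static void reportCheckSumFailure(LocatedBlockStub block) {
        lastReport = "corrupt:" + block.id;
    }

    // Guard added: skip reporting when there is no current block, so the
    // original exception survives instead of being masked by an NPE.
    static RuntimeException readWithGuard(LocatedBlockStub current,
                                          RuntimeException original) {
        if (current != null) {
            reportCheckSumFailure(current);
        }
        return original;
    }

    public static void main(String[] args) {
        RuntimeException cause = new RuntimeException("checksum error");
        // With a null current block, the original exception is preserved.
        System.out.println(readWithGuard(null, cause).getMessage());
    }
}
```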






Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-08 Thread Ajay Kumar
Thanks for work on this, Junping!!

+1 (non-binding)
  - verified binary checksum
- built from source and setup 4 node cluster
- run basic hdfs command
- run wordcount, pi & TestDFSIO (read/write)
- basic check for NN UI

Best,
Ajay

On 5/8/18, 10:41 AM, "俊平堵"  wrote:

Hi all,
 I've created the first release candidate (RC0) for Apache Hadoop
2.8.4. This is our next maintenance release following 2.8.3. It includes 77
important fixes and improvements.

The RC artifacts are available at:
http://home.apache.org/~junping_du/hadoop-2.8.4-RC0

The RC tag in git is: release-2.8.4-RC0

The maven artifacts are available via repository.apache.org<
http://repository.apache.org> at:
https://repository.apache.org/content/repositories/orgapachehadoop-1118

Please try the release and vote; the vote will run for the usual 5
working days, ending on 5/14/2018 PST time.

Thanks,

Junping






[jira] [Created] (HDDS-34) Remove meta file during creation of container

2018-05-08 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-34:
--

 Summary: Remove meta file during creation of container
 Key: HDDS-34
 URL: https://issues.apache.org/jira/browse/HDDS-34
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


During container creation, .container and .meta files are created.

The .meta file stores the container file name and hash, and is not required.

This Jira is an attempt to remove the .meta file and clean up its usage.






[jira] [Created] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path

2018-05-08 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13537:
-

 Summary: TestHdfsHelper does not generate jceks path properly for 
relative path
 Key: HDFS-13537
 URL: https://issues.apache.org/jira/browse/HDFS-13537
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang


In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
{code:java}
final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
When the path from getTestRootDir() is a relative path (on Windows), the 
result is incorrect because there is no "/" between "://file" and the relative path.
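The failure mode, and one possible fix, can be sketched as follows. The "jceks://file" scheme prefix is real, but {{buildJceksPath}} and the sample directory names are hypothetical: absolutizing the test root before appending it guarantees the "/" that a relative path would otherwise omit.

```java
import java.nio.file.Paths;

public class JceksPathSketch {
    static String buildJceksPath(String testRootDir) {
        // Buggy form: "jceks://file" + relative URI yields something like
        // "jceks://filetarget/test.jks", with no separator after "file".
        // Fixed form: absolutize the root dir first.
        String abs = Paths.get(testRootDir).toAbsolutePath().toString()
                          .replace('\\', '/');  // normalize Windows separators
        if (!abs.startsWith("/")) {
            abs = "/" + abs;                    // e.g. Windows "C:/..." paths
        }
        return "jceks://file" + abs + "/test.jks";
    }

    public static void main(String[] args) {
        // A relative input now produces a well-formed URI on any platform.
        String p = buildJceksPath("target/test-dir");
        System.out.println(p.startsWith("jceks://file/")); // true
    }
}
```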






[jira] [Created] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-05-08 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13536:
-

 Summary: [PROVIDED Storage] HA for InMemoryAliasMap
 Key: HDFS-13536
 URL: https://issues.apache.org/jira/browse/HDFS-13536
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


Provide HA for the {{InMemoryLevelDBAliasMapServer}} so it works with an HDFS 
NN configured for high availability.






Re: Apache Hadoop 3.0.3 Release plan

2018-05-08 Thread Vrushali C
+1 for including the YARN-7190 patch in 3.0.3 release. This is a fix that
will enable HBase to use Hadoop 3.0.x in the production line.

thanks
Vrushali


On Tue, May 8, 2018 at 10:24 AM, Yongjun Zhang  wrote:

> Thanks Wei-Chiu and Haibo for the feedback!
>
> The good thing is that I made the following note a couple of days ago when I
> looked at the branch diff, so we are on the same page:
>
>  496dc57 Revert "YARN-7190. Ensure only NM classpath in 2.x gets TSv2
> related hbase jars, not the user classpath. Contributed by Varun Saxena."
>
> *YARN-7190 is not in 3.0.2,  I will include it in 3.0.3 per* the comment
> below:
> https://issues.apache.org/jira/browse/YARN-7190?focusedCommentId=16457649;
> page=com.atlassian.jira.plugin.system.issuetabpanels:
> comment-tabpanel#comment-16457649
>
>
> In addition, I will revert   https://issues.apache.org/
> jira/browse/HADOOP-13055 from 3.0.3 since it's a feature.
>
> Best,
>
> --Yongjun
>
> On Tue, May 8, 2018 at 8:57 AM, Haibo Chen  wrote:
>
> > +1 on adding YARN-7190 to Hadoop 3.0.x despite the fact that it is
> > technically incompatible.
> > It is critical enough to justify being an exception, IMO.
> >
> > Added Rohith and Vrushali
> >
> > On Tue, May 8, 2018 at 6:20 AM, Wei-Chiu Chuang 
> > wrote:
> >
> >> Thanks Yongjun for driving 3.0.3 release!
> >>
> >> IMHO, could we consider adding YARN-7190
> >>  into the list?
> >> I understand that it is listed as an incompatible change, however,
> because
> >> of this bug, HBase considers the entire Hadoop 3.0.x line not production
> >> ready. I feel there's not much point releasing any more 3.0.x releases
> if
> >> downstream projects can't pick it up (after the fact that HBase is one
> of
> >> the most important projects around Hadoop).
> >>
> >> On Mon, May 7, 2018 at 1:19 PM, Yongjun Zhang 
> >> wrote:
> >>
> >> > Hi Eric,
> >> >
> >> > Thanks for the feedback, good point. I will try to clean up things,
> then
> >> > cut branch before the release production and vote.
> >> >
> >> > Best,
> >> >
> >> > --Yongjun
> >> >
> >> > On Mon, May 7, 2018 at 8:39 AM, Eric Payne  >> > invalid
> >> > > wrote:
> >> >
> >> > > >  We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and
> >> vote
> >> > > for RC on May 30th
> >> > > I much prefer to wait to cut the branch until just before the
> >> production
> >> > > of the release and the vote. With so many branches, we sometimes
> miss
> >> > > putting critical bug fixes in unreleased branches if the branch is
> cut
> >> > too
> >> > > early.
> >> > >
> >> > > My 2 cents...
> >> > > Thanks,
> >> > > -Eric Payne
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Monday, May 7, 2018, 12:09:00 AM CDT, Yongjun Zhang <
> >> > > yjzhan...@apache.org> wrote:
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > Hi All,
> >> > >
> >> > > >
> >> > > We have released Apache Hadoop 3.0.2 in April of this year [1].
> Since
> >> > then,
> >> > > there are quite some commits done to branch-3.0. To further improve
> >> the
> >> > > quality of release, we plan to do 3.0.3 release now. The focus of
> >> 3.0.3
> >> > > will be fixing blockers (3), critical bugs (17) and bug fixes
> (~130),
> >> see
> >> > > [2].
> >> > >
> >> > > Usually no new feature should be included for maintenance releases,
> I
> >> > > noticed we have https://issues.apache.org/jira/browse/HADOOP-13055
> in
> >> > the
> >> > > branch classified as new feature. I will talk with the developers to
> >> see
> >> > if
> >> > > we should include it in 3.0.3.
> >> > >
> >> > > I also noticed that there are more commits in the branch than can be
> >> > found
> >> > > by query [2], also some commits committed to 3.0.3 do not have their
> >> jira
> >> > > target release field filled in accordingly. I will go through them
> to
> >> > > update the jira.
> >> > >
> >> > > >
> >> > > We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and
> vote
> >> > for
> >> > > RC on May 30th, targeting for Jun 8th release.
> >> > >
> >> > > >
> >> > > Your insights are welcome.
> >> > >
> >> > > >
> >> > > [1] https://www.mail-archive.com/general@hadoop.apache.org/msg07
> >> 790.html
> >> > >
> >> > > > [2] https://issues.apache.org/jira/issues/?filter=12343874  See
> >> Note
> >> > > below
> >> > > Note: seems I need some admin change so that I can make the filter
> in
> >> [2]
> >> > > public, I'm working on that. For now, you can use jquery
> >> > > (project = hadoop OR project = "Hadoop HDFS" OR project = "Hadoop
> >> YARN"
> >> > OR
> >> > > project = "Hadoop Map/Reduce") AND fixVersion in (3.0.3) ORDER BY
> >> > priority
> >> > > DESC
> >> > >
> >> > > Thanks and best regards,
> >> > >
> >> > > --Yongjun
> >> > >
> >> > > 
> -
> >> > > To unsubscribe, e-mail: 

[VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-08 Thread 俊平堵
Hi all,
 I've created the first release candidate (RC0) for Apache Hadoop
2.8.4. This is our next maintenance release following 2.8.3. It includes 77
important fixes and improvements.

The RC artifacts are available at:
http://home.apache.org/~junping_du/hadoop-2.8.4-RC0

The RC tag in git is: release-2.8.4-RC0

The maven artifacts are available via repository.apache.org<
http://repository.apache.org> at:
https://repository.apache.org/content/repositories/orgapachehadoop-1118

Please try the release and vote; the vote will run for the usual 5
working days, ending on 5/14/2018 PST time.

Thanks,

Junping


Re: Apache Hadoop 3.0.3 Release plan

2018-05-08 Thread Yongjun Zhang
Thanks Wei-Chiu and Haibo for the feedback!

The good thing is that I made the following note a couple of days ago when I
looked at the branch diff, so we are on the same page:

 496dc57 Revert "YARN-7190. Ensure only NM classpath in 2.x gets TSv2
related hbase jars, not the user classpath. Contributed by Varun Saxena."

*YARN-7190 is not in 3.0.2,  I will include it in 3.0.3 per* the comment
below:
https://issues.apache.org/jira/browse/YARN-7190?focusedCommentId=16457649;
page=com.atlassian.jira.plugin.system.issuetabpanels:
comment-tabpanel#comment-16457649


In addition, I will revert   https://issues.apache.org/
jira/browse/HADOOP-13055 from 3.0.3 since it's a feature.

Best,

--Yongjun

On Tue, May 8, 2018 at 8:57 AM, Haibo Chen  wrote:

> +1 on adding YARN-7190 to Hadoop 3.0.x despite the fact that it is
> technically incompatible.
> It is critical enough to justify being an exception, IMO.
>
> Added Rohith and Vrushali
>
> On Tue, May 8, 2018 at 6:20 AM, Wei-Chiu Chuang 
> wrote:
>
>> Thanks Yongjun for driving 3.0.3 release!
>>
>> IMHO, could we consider adding YARN-7190
>>  into the list?
>> I understand that it is listed as an incompatible change, however, because
>> of this bug, HBase considers the entire Hadoop 3.0.x line not production
>> ready. I feel there's not much point releasing any more 3.0.x releases if
>> downstream projects can't pick it up (after the fact that HBase is one of
>> the most important projects around Hadoop).
>>
>> On Mon, May 7, 2018 at 1:19 PM, Yongjun Zhang 
>> wrote:
>>
>> > Hi Eric,
>> >
>> > Thanks for the feedback, good point. I will try to clean up things, then
>> > cut branch before the release production and vote.
>> >
>> > Best,
>> >
>> > --Yongjun
>> >
>> > On Mon, May 7, 2018 at 8:39 AM, Eric Payne > > invalid
>> > > wrote:
>> >
>> > > >  We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and
>> vote
>> > > for RC on May 30th
>> > > I much prefer to wait to cut the branch until just before the
>> production
>> > > of the release and the vote. With so many branches, we sometimes miss
>> > > putting critical bug fixes in unreleased branches if the branch is cut
>> > too
>> > > early.
>> > >
>> > > My 2 cents...
>> > > Thanks,
>> > > -Eric Payne
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > On Monday, May 7, 2018, 12:09:00 AM CDT, Yongjun Zhang <
>> > > yjzhan...@apache.org> wrote:
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > Hi All,
>> > >
>> > > >
>> > > We have released Apache Hadoop 3.0.2 in April of this year [1]. Since
>> > then,
>> > > there are quite some commits done to branch-3.0. To further improve
>> the
>> > > quality of release, we plan to do 3.0.3 release now. The focus of
>> 3.0.3
>> > > will be fixing blockers (3), critical bugs (17) and bug fixes (~130),
>> see
>> > > [2].
>> > >
>> > > Usually no new feature should be included for maintenance releases, I
>> > > noticed we have https://issues.apache.org/jira/browse/HADOOP-13055 in
>> > the
>> > > branch classified as new feature. I will talk with the developers to
>> see
>> > if
>> > > we should include it in 3.0.3.
>> > >
>> > > I also noticed that there are more commits in the branch than can be
>> > found
>> > > by query [2], also some commits committed to 3.0.3 do not have their
>> jira
>> > > target release field filled in accordingly. I will go through them to
>> > > update the jira.
>> > >
>> > > >
>> > > We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and vote
>> > for
>> > > RC on May 30th, targeting for Jun 8th release.
>> > >
>> > > >
>> > > Your insights are welcome.
>> > >
>> > > >
>> > > [1] https://www.mail-archive.com/general@hadoop.apache.org/msg07
>> 790.html
>> > >
>> > > > [2] https://issues.apache.org/jira/issues/?filter=12343874  See
>> Note
>> > > below
>> > > Note: seems I need some admin change so that I can make the filter in
>> [2]
>> > > public, I'm working on that. For now, you can use jquery
>> > > (project = hadoop OR project = "Hadoop HDFS" OR project = "Hadoop
>> YARN"
>> > OR
>> > > project = "Hadoop Map/Reduce") AND fixVersion in (3.0.3) ORDER BY
>> > priority
>> > > DESC
>> > >
>> > > Thanks and best regards,
>> > >
>> > > --Yongjun
>> > >
>> > > -
>> > > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
>> > > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>> > >
>> > >
>> >
>>
>>
>>
>> --
>> A very happy Hadoop contributor
>>
>
>


[jira] [Created] (HDFS-13535) Fix libhdfs++ doxygen build

2018-05-08 Thread Mitchell Tracy (JIRA)
Mitchell Tracy created HDFS-13535:
-

 Summary: Fix libhdfs++ doxygen build
 Key: HDFS-13535
 URL: https://issues.apache.org/jira/browse/HDFS-13535
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.2
Reporter: Mitchell Tracy


Currently, the doxygen build for libhdfs++ doesn't include all of the necessary 
source directories, and it does not generate the actual HTML documentation. The 
fix is to include all the required source directories when generating the 
doxyfile, and then add a Maven step to generate the HTML documentation.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/

[May 7, 2018 3:33:14 AM] (wwei) YARN-8025. 
UsersManangers#getComputedResourceLimitForActiveUsers throws
[May 7, 2018 10:54:08 AM] (stevel) HADOOP-15446. WASB: PageBlobInputStream.skip 
breaks HBASE replication.
[May 7, 2018 8:32:27 PM] (xiao) Revert "HDFS-13430. Fix 
TestEncryptionZonesWithKMS failure due to
[May 7, 2018 8:32:27 PM] (xiao) Revert "HADOOP-14445. Delegation tokens are not 
shared between KMS
[May 7, 2018 9:58:52 PM] (aengineer) HDDS-1. Remove SCM Block DB. Contributed 
by Xiaoyu Yao.
[May 7, 2018 10:36:29 PM] (xiao) HDFS-12981. renameSnapshot a Non-Existent 
snapshot to itself should
[May 8, 2018 1:29:50 AM] (xyao) HDDS-27. Fix 
TestStorageContainerManager#testBlockDeletionTransactions.




-1 overall


The following subsystems voted -1:
asflicense findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Found reliance on default encoding in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]):in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]): String.getBytes() At MetadataKeyFilters.java:[line 97] 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-hdds_common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [32K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/branch-findbugs-hadoop-tools_hadoop-ozone.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/775/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [192K]
   

[jira] [Created] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-08 Thread James Clampffer (JIRA)
James Clampffer created HDFS-13534:
--

 Summary: libhdfs++: Fix GCC7 build
 Key: HDFS-13534
 URL: https://issues.apache.org/jira/browse/HDFS-13534
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: James Clampffer
Assignee: James Clampffer


After merging HDFS-13403, [~pifta] noticed the build broke on some platforms.  
[~bibinchundatt] pointed out that prior to GCC 7, the mutex, future, and regex 
headers implicitly included functional. Without that implicit include, the 
compiler errors out on the std::function usage in ioservice.h.






Re: Apache Hadoop 3.0.3 Release plan

2018-05-08 Thread Wei-Chiu Chuang
Thanks Yongjun for driving 3.0.3 release!

IMHO, could we consider adding YARN-7190
 into the list?
I understand that it is listed as an incompatible change, however, because
of this bug, HBase considers the entire Hadoop 3.0.x line not production
ready. I feel there's not much point releasing any more 3.0.x releases if
downstream projects can't pick it up (after the fact that HBase is one of
the most important projects around Hadoop).

On Mon, May 7, 2018 at 1:19 PM, Yongjun Zhang  wrote:

> Hi Eric,
>
> Thanks for the feedback, good point. I will try to clean up things, then
> cut branch before the release production and vote.
>
> Best,
>
> --Yongjun
>
> On Mon, May 7, 2018 at 8:39 AM, Eric Payne  invalid
> > wrote:
>
> > >  We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and vote
> > for RC on May 30th
> > I much prefer to wait to cut the branch until just before the production
> > of the release and the vote. With so many branches, we sometimes miss
> > putting critical bug fixes in unreleased branches if the branch is cut
> too
> > early.
> >
> > My 2 cents...
> > Thanks,
> > -Eric Payne
> >
> >
> >
> >
> >
> > On Monday, May 7, 2018, 12:09:00 AM CDT, Yongjun Zhang <
> > yjzhan...@apache.org> wrote:
> >
> >
> >
> >
> >
> > Hi All,
> >
> > >
> > We have released Apache Hadoop 3.0.2 in April of this year [1]. Since
> then,
> > there are quite some commits done to branch-3.0. To further improve the
> > quality of release, we plan to do 3.0.3 release now. The focus of 3.0.3
> > will be fixing blockers (3), critical bugs (17) and bug fixes (~130), see
> > [2].
> >
> > Usually no new feature should be included for maintenance releases, I
> > noticed we have https://issues.apache.org/jira/browse/HADOOP-13055 in
> the
> > branch classified as new feature. I will talk with the developers to see
> if
> > we should include it in 3.0.3.
> >
> > I also noticed that there are more commits in the branch than can be
> found
> > by query [2], also some commits committed to 3.0.3 do not have their jira
> > target release field filled in accordingly. I will go through them to
> > update the jira.
> >
> > >
> > We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and vote
> for
> > RC on May 30th, targeting for Jun 8th release.
> >
> > >
> > Your insights are welcome.
> >
> > >
> > [1] https://www.mail-archive.com/general@hadoop.apache.org/msg07790.html
> >
> > > [2] https://issues.apache.org/jira/issues/?filter=12343874  See Note
> > below
> > Note: seems I need some admin change so that I can make the filter in [2]
> > public, I'm working on that. For now, you can use jquery
> > (project = hadoop OR project = "Hadoop HDFS" OR project = "Hadoop YARN"
> OR
> > project = "Hadoop Map/Reduce") AND fixVersion in (3.0.3) ORDER BY
> priority
> > DESC
> >
> > Thanks and best regards,
> >
> > --Yongjun
> >
> > -
> > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> >
> >
>



-- 
A very happy Hadoop contributor


Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-08 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/461/

[May 7, 2018 8:32:27 PM] (xiao) Revert "HDFS-13430. Fix 
TestEncryptionZonesWithKMS failure due to
[May 7, 2018 8:32:27 PM] (xiao) Revert "HADOOP-14445. Delegation tokens are not 
shared between KMS
[May 7, 2018 9:58:52 PM] (aengineer) HDDS-1. Remove SCM Block DB. Contributed 
by Xiaoyu Yao.
[May 7, 2018 10:36:29 PM] (xiao) HDFS-12981. renameSnapshot a Non-Existent 
snapshot to itself should
[May 8, 2018 1:29:50 AM] (xyao) HDDS-27. Fix 
TestStorageContainerManager#testBlockDeletionTransactions.
[May 8, 2018 5:05:19 AM] (sunilg) YARN-5151. [UI2] Support kill application 
from new YARN UI. Contributed
[May 8, 2018 6:07:03 AM] (sunilg) YARN-8251. [UI2] Clicking on Application link 
at the header goes to
[May 8, 2018 6:58:54 AM] (rohithsharmaks) YARN-8253. HTTPS Ats v2 api call 
fails with 'bad HTTP parsed'.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.authentication.util.TestZKSignerSecretProvider 
   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.curator.TestChildReaper 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.crypto.key.kms.server.TestKMSWithZK 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestDNFencing 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   

[jira] [Resolved] (HDFS-13437) Ozone : Make BlockId in SCM a long value

2018-05-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-13437.

Resolution: Fixed

This has already been taken care of as part of HDDS-1.

> Ozone : Make BlockId in SCM a long value
> 
>
> Key: HDFS-13437
> URL: https://issues.apache.org/jira/browse/HDFS-13437
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-13437.000.patch
>
>
> Currently, when a block is allocated inside SCM, it is assigned a UUID 
> string value. This should be made a long value.






[jira] [Resolved] (HDDS-12) Modify containerProtocol Calls for read, write and delete chunk to datanode to use a "long" blockId key

2018-05-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-12.
-
Resolution: Fixed

This has already been taken care of as part of HDDS-1.

> Modify containerProtocol Calls for read, write and delete chunk to datanode 
> to use a "long" blockId key
> ---
>
> Key: HDDS-12
> URL: https://issues.apache.org/jira/browse/HDDS-12
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> HDFS-13437 changes the blockId in SCM to a long value. Accordingly, the 
> container protocol protobuf messages and handlers need to change to use a 
> long blockId value rather than the string used currently. This Jira 
> proposes to address this.






[jira] [Created] (HDFS-13533) Configuration for RBF in namenode/datanode

2018-05-08 Thread Sophie Wang (JIRA)
Sophie Wang created HDFS-13533:
--

 Summary: Configuration for RBF in namenode/datanode
 Key: HDFS-13533
 URL: https://issues.apache.org/jira/browse/HDFS-13533
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sophie Wang





