Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/310/

[May 9, 2017 4:22:53 PM] (wang) HADOOP-14386. Rewind trunk from Guava 21.0 back 
to Guava 11.0.2.
[May 9, 2017 5:27:17 PM] (lei) HADOOP-14384. Reduce the visibility of
[May 9, 2017 5:31:52 PM] (cdouglas) AuditLogger and TestAuditLogger are dead 
code. Contributed by Vrushali C
[May 9, 2017 6:18:12 PM] (vrushali) YARN-6563 ConcurrentModificationException 
in TimelineCollectorManager
[May 9, 2017 7:05:46 PM] (templedf) YARN-5301. NM mount cpu cgroups failed on 
some systems (Contributed by
[May 9, 2017 9:08:34 PM] (jlowe) HADOOP-14377. Increase Common test timeouts 
from 1 second to 10 seconds.
[May 9, 2017 9:44:16 PM] (kasha) YARN-3742. YARN RM will shut down if ZKClient 
creation times out.
[May 10, 2017 4:12:57 AM] (haibochen) YARN-6435. [ATSv2] Can't retrieve more 
than 1000 versions of metrics in
[May 10, 2017 4:37:30 AM] (haibochen) YARN-6561. Update exception message 
during timeline collector aux
[May 10, 2017 5:16:41 AM] (iwasakims) HADOOP-14405. Fix performance regression 
due to incorrect use of
[May 10, 2017 10:29:47 AM] (aajisaka) HADOOP-14373. License error in 
org.apache.hadoop.metrics2.util.Servers.
[May 10, 2017 10:57:12 AM] (aajisaka) HADOOP-14400. Fix warnings from spotbugs 
in hadoop-tools. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.ha.TestZKFailoverControllerStress 
   hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.server.namenode.TestProcessCorruptBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.TestDFSUpgrade 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives 
   hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   

Re: About 2.7.4 Release

2017-05-10 Thread Konstantin Shvachko
Hey Akira,

I didn't have any private filters. Most probably Jira caches something.
Your filter is in the right direction, but for some reason it lists only 22
issues, while mine has 29.
It misses, e.g., YARN-5543.

Anyway, I have now created a Jira filter, "Hadoop 2.7.4 release blockers",
shared it with "everybody", and updated my link to point to that filter. So
you can use any of the three methods below to get the correct list:
1. Go to https://s.apache.org/Dzg4
2. Go to the filter via
https://issues.apache.org/jira/issues?filter=12340814
   or by finding the "Hadoop 2.7.4 release blockers" filter in Jira
3. On the Advanced issue search page, paste this:
project in (HDFS, HADOOP, YARN, MAPREDUCE) AND labels = release-blocker AND
"Target Version/s" = 2.7.4

Hope this clears up the confusion about which issues are included.
Please LMK if it doesn't, as it is important.

Thanks,
--Konstantin

On Tue, May 9, 2017 at 9:58 AM, Akira Ajisaka  wrote:

> Hi Konstantin,
>
> Thank you for volunteering as release manager!
>
> > Actually the original link works fine: https://s.apache.org/Dzg4
> I couldn't see the link. Maybe it is a private filter?
>
> Here is a link I generated: https://s.apache.org/ehKy
> This filter includes resolved issues and excludes fixVersion == 2.7.4
>
> Thanks and Regards,
> Akira
>
> On 2017/05/08 19:20, Konstantin Shvachko wrote:
>
>> Hi Brahma Reddy Battula,
>>
>> Actually the original link works fine: https://s.apache.org/Dzg4
>> Your link excludes closed and resolved issues, which need backporting and
>> which we cannot reopen, as discussed in this thread earlier.
>>
>> Looked through the issues you proposed:
>>
>> HDFS-9311 
>> Seems like a new feature. It helps fail over to the standby node when the
>> primary is under heavy load, but it introduces new APIs, addresses, and
>> config parameters, and needs at least one follow-up jira.
>> It looks like a backward-compatible change, though.
>> Did you have a chance to run it in production?
>>
>> +1 on
>> HDFS-10987 
>> HDFS-9902 
>> HDFS-8312 
>> HADOOP-14100 
>>
>> Added them to the 2.7.4 release. You should see them via the above link now.
>> It would be good if you could attach backport patches for some of them.
>>
>> Appreciate your help,
>> --Konstantin
>>
>> On Mon, May 8, 2017 at 8:39 AM, Brahma Reddy Battula <
>> brahmareddy.batt...@huawei.com> wrote:
>>
>>
>>> It looks like the following link is not correct:
>>>
>>> https://s.apache.org/Dzg4
>>>
>>> Should it be the following instead?
>>>
>>> https://s.apache.org/wi3U
>>>
>>>
>>> Apart from what Konstantin mentioned, are the following also good to go?
>>> Let me know your thoughts on this.
>>>
>>> For Large Cluster:
>>> =
>>>
>>> https://issues.apache.org/jira/browse/HDFS-9311===Lifeline Protocol
>>> https://issues.apache.org/jira/browse/HDFS-10987===Decommission is
>>> expensive when lots of blocks are present
>>>
>>> https://issues.apache.org/jira/browse/HDFS-9902===
>>> "dfs.datanode.du.reserved"  per Storage Type
>>>
>>> For Security:
>>> =
>>> https://issues.apache.org/jira/browse/HDFS-8312===Trash does not descend
>>> into child directories to check for permissions
>>> https://issues.apache.org/jira/browse/HADOOP-14100===Upgrade Jsch jar to
>>> latest version to fix vulnerability in old versions
>>>
>>>
>>>
>>> Regards
>>> Brahma Reddy Battula
>>>
>>> -Original Message-
>>> From: Erik Krogen [mailto:ekro...@linkedin.com.INVALID]
>>> Sent: 06 May 2017 02:40
>>> To: Konstantin Shvachko
>>> Cc: Zhe Zhang; Hadoop Common; Hdfs-dev; mapreduce-dev@hadoop.apache.org;
>>> yarn-...@hadoop.apache.org
>>> Subject: Re: About 2.7.4 Release
>>>
>>> List LGTM Konstantin!
>>>
>>> Let's say that we will only create a new tracking JIRA for patches that
>>> do not backport cleanly, to avoid having too many lying around. Otherwise
>>> we can attach directly to the old ticket. If a clean backport does happen
>>> to break a test, the nightly build will help us catch it.
>>>
>>> Erik
>>>
>>> On Thu, May 4, 2017 at 7:21 PM, Konstantin Shvachko <
>>> shv.had...@gmail.com>
>>> wrote:
>>>
>>> Great Zhe. Let's monitor the build.

 I marked all jiras I knew of for inclusion into 2.7.4 as I described
 before.
 Target Version/s: 2.7.4
 Label: release-blocker

 Here is the link to the list: https://s.apache.org/Dzg4 Please let me
 know if I missed anything.
 And feel free to pick up any. Most of the backports are pretty
 straightforward, but not all.

 We can create tracking jiras for backporting if you need to run
 Jenkins on the patch (and since Allen does not allow reopening them).
 But I think the final patch should be attached to the original jira.
 Otherwise 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/399/

[May 9, 2017 10:37:43 AM] (aajisaka) HADOOP-14374. License error in 
GridmixTestUtils.java. Contributed by
[May 9, 2017 4:22:53 PM] (wang) HADOOP-14386. Rewind trunk from Guava 21.0 back 
to Guava 11.0.2.
[May 9, 2017 5:27:17 PM] (lei) HADOOP-14384. Reduce the visibility of
[May 9, 2017 5:31:52 PM] (cdouglas) AuditLogger and TestAuditLogger are dead 
code. Contributed by Vrushali C
[May 9, 2017 6:18:12 PM] (vrushali) YARN-6563 ConcurrentModificationException 
in TimelineCollectorManager
[May 9, 2017 7:05:46 PM] (templedf) YARN-5301. NM mount cpu cgroups failed on 
some systems (Contributed by
[May 9, 2017 9:08:34 PM] (jlowe) HADOOP-14377. Increase Common test timeouts 
from 1 second to 10 seconds.
[May 9, 2017 9:44:16 PM] (kasha) YARN-3742. YARN RM will shut down if ZKClient 
creation times out.
[May 10, 2017 4:12:57 AM] (haibochen) YARN-6435. [ATSv2] Can't retrieve more 
than 1000 versions of metrics in
[May 10, 2017 4:37:30 AM] (haibochen) YARN-6561. Update exception message 
during timeline collector aux
[May 10, 2017 5:16:41 AM] (iwasakims) HADOOP-14405. Fix performance regression 
due to incorrect use of




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 
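The defect class FindBugs is flagging here is that File.listFiles() returns null (rather than an empty array) when the path is not a directory or an I/O error occurs, so dereferencing its result without a check can throw a NullPointerException. A minimal sketch of the guarded pattern follows; SafeDelete and its delete() helper are hypothetical names for illustration, not the actual MiniKdc code:

```java
import java.io.File;

public class SafeDelete {
    // Recursively delete a directory tree, guarding against the null
    // return of File.listFiles() that FindBugs warns about.
    static boolean delete(File f) {
        if (f.isDirectory()) {
            File[] children = f.listFiles(); // may be null on I/O error
            if (children != null) {          // the null check FindBugs wants
                for (File child : children) {
                    delete(child);
                }
            }
        }
        return f.delete();
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "safedelete-demo");
        File sub = new File(dir, "sub");
        sub.mkdirs();
        new File(sub, "a.txt").createNewFile();
        System.out.println(delete(dir)); // true: tree removed
    }
}
```

Alternatively, java.nio.file.Files.newDirectoryStream throws IOException instead of returning null, which makes the failure mode explicit.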

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 
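The keySet-vs-entrySet finding above refers to a common pattern: iterating a map's keys and calling get() for each one performs a redundant lookup per entry, whereas entrySet() yields key and value together. A small illustration of the two forms, under hypothetical names (this is not the MultiSchemeAuthenticationHandler code itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
    // Inefficient form FindBugs flags: one extra hash lookup per key.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k); // second lookup for every key
        }
        return sum;
    }

    // Preferred form: the entry carries both key and value.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue(); // no extra lookup
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m)); // true
    }
}
```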

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet
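The "incorrectly handles double value" findings against DoubleWritable and FloatWritable above refer to the hazard of hand-rolled floating-point comparisons: relational operators treat NaN as unordered, so a naive three-way compare can break the compareTo contract. A minimal sketch of the defect class (badCompare is an illustrative stand-in, not the actual DoubleWritable code), with Double.compare as the fix:

```java
public class DoubleCompareDemo {
    // Hand-rolled comparison: every relational test on NaN is false,
    // so NaN falls through to the final branch in both argument orders.
    static int badCompare(double a, double b) {
        return a < b ? -1 : (a == b ? 0 : 1);
    }

    public static void main(String[] args) {
        // Contract violation: sgn(cmp(x, y)) should equal -sgn(cmp(y, x)).
        System.out.println(badCompare(Double.NaN, 1.0)); // 1
        System.out.println(badCompare(1.0, Double.NaN)); // 1 (contract broken)

        // Double.compare orders NaN and -0.0 consistently.
        System.out.println(Double.compare(Double.NaN, 1.0)); // 1
        System.out.println(Double.compare(1.0, Double.NaN)); // -1
        System.out.println(Double.compare(-0.0, 0.0));       // -1
    }
}
```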