Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/

[May 11, 2017 6:03:45 PM] (brahma) HADOOP-14410. Correct spelling of  
'beginning' and variants. Contributed
[May 11, 2017 7:06:06 PM] (templedf) HADOOP-14413. Add Javadoc comment for 
jitter parameter on
[May 11, 2017 8:25:31 PM] (shv) YARN-5543. ResourceManager SchedulingMonitor 
could potentially terminate
[May 11, 2017 8:47:02 PM] (templedf) YARN-6380. FSAppAttempt keeps redundant 
copy of the queue
[May 11, 2017 9:09:49 PM] (wang) HDFS-11757. Query StreamCapabilities when 
creating balancer's lock file.
[May 11, 2017 9:37:32 PM] (aajisaka) HADOOP-14401. 
maven-project-info-reports-plugin can be removed.
[May 12, 2017 2:08:18 AM] (vinayakumarb) HDFS-11674. reserveSpaceForReplicas is 
not released if append request




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverControllerStress 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 

Timed out junit tests :

   org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
   org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf 
  

   mvninstall:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-mvninstall-root.txt  [496K]

   compile:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-compile-root.txt  [20K]

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-compile-root.txt  [20K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-compile-root.txt  [20K]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-unit-hadoop-assemblies.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [144K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [688K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/312/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt  [16K]
   

Re: About 2.7.4 Release

2017-05-12 Thread Konstantin Shvachko
Latest update on the links and filters. Here is the correct link for the
filter:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?requestId=12340814

Also updated: https://s.apache.org/Dzg4

I had to do some Jira debugging. Sorry for the confusion.

Thanks,
--Konstantin

On Wed, May 10, 2017 at 2:30 PM, Konstantin Shvachko 
wrote:

> Hey Akira,
>
> I didn't have any private filters. Most probably Jira is caching something.
> Your filter is in the right direction, but for some reason it lists only
> 22 issues, while mine has 29.
> It misses, e.g., YARN-5543.
>
> Anyway, I have now created a Jira filter, "Hadoop 2.7.4 release blockers",
> shared it with "everybody", and updated my link to point to that filter. So
> you can use any of the three methods below to get the correct list:
> 1. Go to https://s.apache.org/Dzg4
> 2. Go to the filter via
> https://issues.apache.org/jira/issues?filter=12340814
>    or by finding the "Hadoop 2.7.4 release blockers" filter in Jira
> 3. On the Advanced issue search page, paste this JQL (a sketch for running
> the same query programmatically follows below):
> project in (HDFS, HADOOP, YARN, MAPREDUCE) AND labels = release-blocker
> AND "Target Version/s" = 2.7.4
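>
> For reference, a minimal sketch of running the same JQL programmatically
> (assuming JIRA's documented REST search endpoint at /rest/api/2/search;
> the class name and the field list here are hypothetical):
>
>    import java.io.BufferedReader;
>    import java.io.InputStreamReader;
>    import java.net.HttpURLConnection;
>    import java.net.URL;
>    import java.net.URLEncoder;
>    import java.nio.charset.StandardCharsets;
>
>    public class ReleaseBlockerQuery {
>      public static void main(String[] args) throws Exception {
>        // Same JQL as above, URL-encoded for the REST search endpoint.
>        String jql = "project in (HDFS, HADOOP, YARN, MAPREDUCE) "
>            + "AND labels = release-blocker AND \"Target Version/s\" = 2.7.4";
>        String url = "https://issues.apache.org/jira/rest/api/2/search"
>            + "?jql=" + URLEncoder.encode(jql, StandardCharsets.UTF_8.name())
>            + "&fields=key,summary,status&maxResults=100";
>        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
>        conn.setRequestProperty("Accept", "application/json");
>        try (BufferedReader in = new BufferedReader(
>            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
>          String line;
>          while ((line = in.readLine()) != null) {
>            System.out.println(line); // raw JSON; pretty-print with any JSON tool
>          }
>        }
>      }
>    }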
>
> Hope this resolves the confusion about which issues are included.
> Please let me know if it doesn't, as it is important.
>
> Thanks,
> --Konstantin
>
> On Tue, May 9, 2017 at 9:58 AM, Akira Ajisaka  wrote:
>
>> Hi Konstantin,
>>
>> Thank you for volunteering as release manager!
>>
>> > Actually the original link works fine: https://s.apache.org/Dzg4
>> I couldn't see the link. Maybe it is a private filter?
>>
>> Here is a link I generated: https://s.apache.org/ehKy
>> This filter includes resolved issues and excludes fixVersion == 2.7.4.
>>
>> Thanks and Regards,
>> Akira
>>
>> On 2017/05/08 19:20, Konstantin Shvachko wrote:
>>
>>> Hi Brahma Reddy Battula,
>>>
>>> Actually the original link works fine: https://s.apache.org/Dzg4
>>> Your link excludes closed and resolved issues, which need backporting and
>>> which we cannot reopen, as discussed in this thread earlier.
>>>
>>> Looked through the issues you proposed:
>>>
>>> HDFS-9311 
>>> Seems like a new feature. It helps failover to the standby node when the
>>> primary is under heavy load, but it introduces new APIs, addresses, and
>>> config parameters, and needs at least one follow-up jira.
>>> It looks like a backward-compatible change, though.
>>> Did you have a chance to run it in production?
>>>
>>> +1 on
>>> HDFS-10987 
>>> HDFS-9902 
>>> HDFS-8312 
>>> HADOOP-14100 
>>>
>>> Added them to the 2.7.4 release. You should see them via the above link now.
>>> It would be great if you could attach backport patches for some of them.
>>>
>>> Appreciate your help,
>>> --Konstantin
>>>
>>> On Mon, May 8, 2017 at 8:39 AM, Brahma Reddy Battula <
>>> brahmareddy.batt...@huawei.com> wrote:
>>>
>>>
 Looks like the following link is not correct:

 https://s.apache.org/Dzg4

 Should it be like the following?

 https://s.apache.org/wi3U


 Apart from what Konstantin mentioned, are the following also good to go? Let
 me know your thoughts on this.

 For Large Cluster:
 ==================

 https://issues.apache.org/jira/browse/HDFS-9311===Lifeline Protocol
 https://issues.apache.org/jira/browse/HDFS-10987===Decommission is expensive
 when lots of blocks are present

 https://issues.apache.org/jira/browse/HDFS-9902===
 "dfs.datanode.du.reserved" per Storage Type

 For Security:
 =============
 https://issues.apache.org/jira/browse/HDFS-8312===Trash does not descend
 into child directories to check for permissions
 https://issues.apache.org/jira/browse/HADOOP-14100===Upgrade the Jsch jar to
 the latest version to fix a vulnerability in old versions



 Regards
 Brahma Reddy Battula

 -Original Message-
 From: Erik Krogen [mailto:ekro...@linkedin.com.INVALID]
 Sent: 06 May 2017 02:40
 To: Konstantin Shvachko
 Cc: Zhe Zhang; Hadoop Common; Hdfs-dev; mapreduce-dev@hadoop.apache.org
 ;
 yarn-...@hadoop.apache.org
 Subject: Re: About 2.7.4 Release

 List LGTM Konstantin!

 Let's say that we will only create a new tracking JIRA for patches which do
 not backport cleanly, to avoid having too many lying around. Otherwise we
 can attach the patch directly to the old ticket. If a clean backport does
 happen to break a test, the nightly build will help us catch it.

 Erik

 On Thu, May 4, 2017 at 7:21 PM, Konstantin Shvachko <
 shv.had...@gmail.com>
 wrote:

 Great Zhe. Let's monitor the build.
>
> I marked all jiras I knew of for inclusion into 2.7.4 as 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/401/

[May 11, 2017 6:03:45 PM] (brahma) HADOOP-14410. Correct spelling of  
'beginning' and variants. Contributed
[May 11, 2017 7:06:06 PM] (templedf) HADOOP-14413. Add Javadoc comment for 
jitter parameter on
[May 11, 2017 8:25:31 PM] (shv) YARN-5543. ResourceManager SchedulingMonitor 
could potentially terminate
[May 11, 2017 8:47:02 PM] (templedf) YARN-6380. FSAppAttempt keeps redundant 
copy of the queue
[May 11, 2017 9:09:49 PM] (wang) HDFS-11757. Query StreamCapabilities when 
creating balancer's lock file.
[May 11, 2017 9:37:32 PM] (aajisaka) HADOOP-14401. 
maven-project-info-reports-plugin can be removed.
[May 12, 2017 2:08:18 AM] (vinayakumarb) HDFS-11674. reserveSpaceForReplicas is 
not released if append request




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 
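
   For context, this class of warning usually means that a call which can
   return null (java.io.File.listFiles() is the classic case) is dereferenced
   without a guard. A minimal, hypothetical sketch of the guarded pattern (not
   the actual MiniKdc.delete code):

      import java.io.File;

      public class SafeDelete {
        // Recursively delete a path, guarding against File.listFiles()
        // returning null, which is what FindBugs flags as a possible
        // null pointer dereference.
        static void delete(File f) {
          if (f.isDirectory()) {
            File[] children = f.listFiles();
            if (children != null) {      // the null check FindBugs wants
              for (File child : children) {
                delete(child);
              }
            }
          }
          if (!f.delete()) {
            System.err.println("Could not delete " + f);
          }
        }

        public static void main(String[] args) {
          delete(new File(args[0]));
        }
      }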

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 
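
   The keySet-vs-entrySet warning is FindBugs' standard inefficient map
   iteration pattern: iterating keySet() and calling get() per key does a
   second lookup that entrySet() avoids. A minimal sketch with a hypothetical
   map (not the actual MultiSchemeAuthenticationHandler code):

      import java.util.HashMap;
      import java.util.Map;

      public class EntrySetDemo {
        public static void main(String[] args) {
          Map<String, String> handlers = new HashMap<>();
          handlers.put("negotiate", "kerberos");
          handlers.put("basic", "ldap");

          // Flagged form: keySet() iteration plus a get() per key.
          for (String key : handlers.keySet()) {
            System.out.println(key + " -> " + handlers.get(key));
          }

          // Preferred form: entrySet() yields key and value together.
          for (Map.Entry<String, String> e : handlers.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
          }
        }
      }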

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
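
   As context for the DoubleWritable and FloatWritable comparator warnings
   above: FindBugs flags floating-point comparisons done by subtraction or
   with bare relational operators because they mishandle NaN and signed zero.
   A minimal, hypothetical sketch of the pattern it accepts (not the actual
   Writable code):

      public class SafeDoubleCompare {
        // Double.compare gives a total ordering, handling NaN and -0.0,
        // which ad-hoc subtraction or < / > comparisons do not.
        static int compareValues(double a, double b) {
          return Double.compare(a, b);
        }

        public static void main(String[] args) {
          System.out.println(compareValues(1.0, 2.0));        // -1
          System.out.println(compareValues(Double.NaN, 1.0)); // 1, NaN sorts last
          System.out.println(compareValues(-0.0, 0.0));       // -1
        }
      }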

[jira] [Created] (MAPREDUCE-6888) Error message of ShuffleHandler should show the exact cause

2017-05-12 Thread Kai Sasaki (JIRA)
Kai Sasaki created MAPREDUCE-6888:
-

 Summary: Error message of ShuffleHandler should show the exact 
cause
 Key: MAPREDUCE-6888
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6888
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor


{{exceptionCaught}} receives the exact cause via the given {{ExceptionEvent}},
but it might not be shown properly in the case of an internal server error:
{code}
  LOG.error("Shuffle error: ", cause);
  if (ch.isConnected()) {
LOG.error("Shuffle error " + e);
sendError(ctx, INTERNAL_SERVER_ERROR);
  }
{code}
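
A possible direction (a minimal sketch under assumptions, not the committed fix)
would be to surface the underlying cause in both the log and the HTTP error
body. This assumes the same local variables as the snippet above and a
sendError overload that accepts a message string:

{code}
  LOG.error("Shuffle error: ", cause);
  if (ch.isConnected()) {
    // Hedged sketch: include the root cause class and message so the
    // client-visible error is more informative than a bare 500.
    String msg = "Internal shuffle error: "
        + cause.getClass().getSimpleName() + ": " + cause.getMessage();
    LOG.error(msg, cause);
    sendError(ctx, msg, INTERNAL_SERVER_ERROR);
  }
{code}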



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org