Re: Regarding Hadoop Erasure Coding architecture

2018-06-14 Thread Lin,Yiqun(vip.com)
Hi Chaitanya,

I suppose you can get more of the details you want to know from the EC design 
doc attached to JIRA HDFS-7285 (https://issues.apache.org/jira/browse/HDFS-7285). 
Hope this helps.

Thanks
Yiqun

-Original Message-
From: Chaitanya M V S [mailto:chaitanya.mvs2...@gmail.com]
Sent: June 14, 2018 21:02
To: hdfs-dev@hadoop.apache.org
Cc: Shreya Gupta
Subject: Regarding Hadoop Erasure Coding architecture

Hi!

We are a group of people trying to understand the architecture of erasure coding 
in Hadoop 3.0. We have been having difficulty understanding a few terms and 
concepts related to it.

1. What do the terms Block, Block Group, Stripe, Cell and Chunk mean in the 
context of erasure coding (these terms have taken on different meanings and have 
been used interchangeably across various documentation and blogs)? How have they 
been incorporated into the reading and writing of EC data?

2. How has the idea/concept of the block from previous versions been carried 
over to EC?

3. The higher-level APIs, those of ErasureCoders and ErasureCodec, still haven't 
been plugged into Hadoop. Also, I haven't found any new Jira regarding them. Are 
there any updates or pointers regarding the incorporation of these APIs into 
Hadoop?

4. How is the datanode for reconstruction work chosen? Also, how are the buffer 
sizes for the reconstruction work determined?


Thanks in advance for your time and considerations.

Regards,
M.V.S.Chaitanya




[jira] [Created] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-169:
---

 Summary: Add Volume IO Stats 
 Key: HDDS-169
 URL: https://issues.apache.org/jira/browse/HDDS-169
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


This Jira is used to add volume IO stats in the datanode.

During writeChunk, readChunk and deleteChunk, add IO calculations for each 
operation, such as readBytes, readOpCount, writeBytes and writeOpCount.
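
A minimal sketch of what such a per-volume stats holder might look like follows; 
the class and method names are illustrative assumptions, not the committed 
HDDS-169 API.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative per-volume IO statistics holder. */
public class VolumeIOStats {
  private final AtomicLong readBytes = new AtomicLong();
  private final AtomicLong readOpCount = new AtomicLong();
  private final AtomicLong writeBytes = new AtomicLong();
  private final AtomicLong writeOpCount = new AtomicLong();

  /** Called from readChunk: one read operation of len bytes. */
  public void incrementReadOpCountAndBytes(long len) {
    readOpCount.incrementAndGet();
    readBytes.addAndGet(len);
  }

  /** Called from writeChunk/deleteChunk: one write operation of len bytes. */
  public void incrementWriteOpCountAndBytes(long len) {
    writeOpCount.incrementAndGet();
    writeBytes.addAndGet(len);
  }

  public long getReadBytes() { return readBytes.get(); }
  public long getReadOpCount() { return readOpCount.get(); }
  public long getWriteBytes() { return writeBytes.get(); }
  public long getWriteOpCount() { return writeOpCount.get(); }
}
{code}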






[jira] [Created] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13682:


 Summary: Cannot create encryption zone after KMS auth token expires
 Key: HDFS-13682
 URL: https://issues.apache.org/jira/browse/HDFS-13682
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, namenode
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen
 Attachments: HDFS-13682.dirty.repro.patch

Our internal testing reported this behavior recently.
{noformat}
[root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt /cdep/keytabs/hdfs.keytab 
hdfs -l 30d -r 30d
[root@nightly6x-1 ~]# sudo -u hdfs klist
Ticket cache: FILE:/tmp/krb5cc_994
Default principal: h...@gce.cloudera.com

Valid starting   Expires  Service principal
06/12/2018 03:24:09  07/12/2018 03:24:09  
krbtgt/gce.cloudera@gce.cloudera.com
[root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 -path 
/user/systest/ez
RemoteException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
{noformat}

Upon further investigation, it's because the KMS client (cached in the HDFS NN) 
cannot authenticate with the server after the authentication token (which is 
cached by KMSCP) expires, even though the HDFS client's RPC has valid Kerberos 
credentials.
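
A plausible shape for a client-side remedy is to invalidate the cached token and 
retry once with fresh Kerberos credentials. The sketch below illustrates that 
pattern with stand-in types; it is not the actual KMSClientProvider fix.

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicReference;

/** Sketch: retry a call once after invalidating an expired cached auth token. */
public class AuthRetrySketch {

  /** Stand-in for the auth token that KMSCP caches between calls. */
  private final AtomicReference<String> cachedToken = new AtomicReference<>();

  /** Stand-in for the AuthenticationException seen in the repro above. */
  static class AuthenticationException extends Exception { }

  <T> T callWithAuthRetry(Callable<T> op) throws Exception {
    try {
      return op.call();        // first attempt rides on the cached token
    } catch (AuthenticationException e) {
      cachedToken.set(null);   // the token expired: invalidate it
      // A real client would re-login from its Kerberos TGT here
      // before retrying, so the retry authenticates afresh.
      return op.call();
    }
  }
}
{code}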






[jira] [Created] (HDDS-168) Add ScmGroupID to Datanode Version File

2018-06-14 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-168:
---

 Summary: Add ScmGroupID to Datanode Version File
 Key: HDDS-168
 URL: https://issues.apache.org/jira/browse/HDDS-168
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru


Add the field {{ScmGroupID}} to the Datanode Version file. This field identifies 
the set of SCMs that this datanode talks to, or takes commands from.

This value is not the same as the Cluster ID, since a cluster can technically 
have more than one SCM group.

Refer to [~anu]'s 
[comment|https://issues.apache.org/jira/browse/HDDS-156?focusedCommentId=16511903&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511903]
 in HDDS-156.
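
Datanode version files are properties-style files on disk, so the change 
presumably amounts to persisting and reading back one more key. A hedged sketch 
follows; the key names and file layout are assumptions.

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

/** Illustrative read/write of an ScmGroupID field in a datanode version file. */
public class VersionFileSketch {

  public static void writeVersionFile(File file, String clusterId,
      String scmGroupId) throws IOException {
    Properties props = new Properties();
    props.setProperty("clusterID", clusterId);    // existing field (assumed key)
    props.setProperty("scmGroupID", scmGroupId);  // new field proposed here
    try (OutputStream out = new FileOutputStream(file)) {
      props.store(out, "Datanode version file");
    }
  }

  public static String readScmGroupId(File file) throws IOException {
    Properties props = new Properties();
    try (InputStream in = new FileInputStream(file)) {
      props.load(in);
    }
    return props.getProperty("scmGroupID");
  }
}
{code}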






[jira] [Created] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-14 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13681:
-

 Summary: Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test 
failure on Windows
 Key: HDFS-13681
 URL: https://issues.apache.org/jira/browse/HDFS-13681
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Liang
Assignee: Xiao Liang


org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
 fails on Windows with the error message below:

NN dir should be created after NN startup. 
expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
 but 
was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>

because the path is not processed properly on Windows.
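
The mismatch above (a backslash File path on one side, a URI-style path on the 
other) is the classic symptom of comparing a raw java.io.File path with a URI 
path. A sketch of the kind of normalization such tests need is below; the actual 
patch may do this differently.

{code:java}
import java.io.File;

public class PathNormalizeSketch {

  /** Convert a platform-specific path to forward-slash URI form. */
  static String toUriPath(File f) {
    // On Windows, new File("F:\\a\\b").toURI().getPath() yields "/F:/a/b",
    // matching the URI-style string on the right side of the failed assertion.
    return f.toURI().getPath();
  }

  public static void main(String[] args) {
    File nameDir = new File("F:\\short\\hadoop-trunk-win\\s\\name");
    // Normalizing both sides before comparing avoids the separator and
    // leading-slash mismatch shown in the error message above.
    System.out.println(toUriPath(nameDir));
  }
}
{code}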






[jira] [Created] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-167:
--

 Summary: Rename KeySpaceManager to OzoneManager
 Key: HDDS-167
 URL: https://issues.apache.org/jira/browse/HDDS-167
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some more 
changes needed to complete the rename everywhere, e.g.:
- command line
- documentation
- unit tests
- acceptance tests







[jira] [Created] (HDFS-13680) Httpfs does not support custom authentication

2018-06-14 Thread Joris Nogneng (JIRA)
Joris Nogneng created HDFS-13680:


 Summary: Httpfs does not support custom authentication
 Key: HDFS-13680
 URL: https://issues.apache.org/jira/browse/HDFS-13680
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Reporter: Joris Nogneng


Currently the HttpFS authentication filter does not support any custom 
authentication: the authentication handler can only be 
PseudoAuthenticationHandler or KerberosDelegationTokenAuthenticationHandler.

We should allow other authentication handlers to manage custom authentication.
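
For context, handlers in hadoop-auth implement 
org.apache.hadoop.security.authentication.server.AuthenticationHandler, so a 
pluggable configuration would let HttpFS load a custom class such as the 
illustrative one below; the header name and trust model are assumptions.

{code:java}
import java.util.Properties;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.security.authentication.server.AuthenticationHandler;
import org.apache.hadoop.security.authentication.server.AuthenticationToken;

/** Illustrative custom handler; the header name and trust model are assumed. */
public class HeaderAuthenticationHandler implements AuthenticationHandler {
  public static final String TYPE = "header";

  @Override
  public String getType() { return TYPE; }

  @Override
  public void init(Properties config) { }

  @Override
  public void destroy() { }

  @Override
  public boolean managementOperation(AuthenticationToken token,
      HttpServletRequest request, HttpServletResponse response) {
    return true;  // no delegation-token management in this sketch
  }

  @Override
  public AuthenticationToken authenticate(HttpServletRequest request,
      HttpServletResponse response) {
    // Trusting a reverse-proxy-set header is purely illustrative.
    String user = request.getHeader("X-Authenticated-User");
    if (user == null) {
      response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
      return null;
    }
    return new AuthenticationToken(user, user, TYPE);
  }
}
{code}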






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/

[Jun 13, 2018 1:28:05 AM] (wwei) YARN-8394. Improve data locality documentation 
for Capacity Scheduler.
[Jun 13, 2018 7:36:02 AM] (jitendra) HADOOP-15483. Upgrade jquery to version 
3.3.1. Contributed by Lokesh
[Jun 13, 2018 10:39:16 AM] (sunilg) YARN-8404. Timeline event publish need to 
be async to avoid Dispatcher
[Jun 13, 2018 12:05:55 PM] (yqlin) HDFS-13641. Add metrics for edit log 
tailing. Contributed by Chao Sun.
[Jun 13, 2018 4:50:10 PM] (aengineer) HDDS-109. Add reconnect logic for 
XceiverClientGrpc. Contributed by
[Jun 13, 2018 6:43:18 PM] (xyao) HDDS-159. RestClient: Implement list 
operations for volume, bucket and
[Jun 13, 2018 11:05:52 PM] (eyang) YARN-8411.  Restart stopped system service 
during RM start.
[Jun 13, 2018 11:24:31 PM] (eyang) YARN-8259.  Improve privileged docker 
container liveliness checks.  
[Jun 14, 2018 1:48:59 AM] (aengineer) HDDS-161. Add functionality to queue 
ContainerClose command from SCM
[Jun 14, 2018 3:18:22 AM] (aengineer) HDDS-163. Add Datanode heartbeat 
dispatcher in SCM. Contributed by
[Jun 14, 2018 7:08:10 AM] (rohithsharmaks) YARN-8155. Improve ATSv2 client 
logging in RM and NM publisher.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen shadedclient unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager

   Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener; 
locked 75% of time. Unsynchronized access at AllocationFileLoaderService.java:[line 117] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.mapred.TestMRTimelineEventHandling 

Failed TAP tests :

   hadoop_stop_daemon.bats.tap 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-compile-javac-root.txt
  [348K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/811/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [60K]
   

[jira] [Created] (HDDS-166) Create a landing page for Ozone

2018-06-14 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-166:
-

 Summary: Create a landing page for Ozone
 Key: HDDS-166
 URL: https://issues.apache.org/jira/browse/HDDS-166
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: document
Reporter: Elek, Marton
Assignee: Elek, Marton


As the Ozone release cycle is separated from Hadoop, we need a separate page to 
publish the releases.






Regarding Hadoop Erasure Coding architecture

2018-06-14 Thread Chaitanya M V S
Hi!

We are a group of people trying to understand the architecture of erasure
coding in Hadoop 3.0. We have been having difficulty understanding a few
terms and concepts related to it.

1. What do the terms Block, Block Group, Stripe, Cell and Chunk mean in the
context of erasure coding (these terms have taken on different meanings and
have been used interchangeably across various documentation and blogs)? How
have they been incorporated into the reading and writing of EC data?

2. How has the idea/concept of the block from previous versions been
carried over to EC?

3. The higher-level APIs, those of ErasureCoders and ErasureCodec, still
haven't been plugged into Hadoop. Also, I haven't found any new Jira
regarding them. Are there any updates or pointers regarding the
incorporation of these APIs into Hadoop?

4. How is the datanode for reconstruction work chosen? Also, how are the
buffer sizes for the reconstruction work determined?


Thanks in advance for your time and considerations.

Regards,
M.V.S.Chaitanya


[jira] [Created] (HDDS-165) Add unit test for HddsDatanodeService

2018-06-14 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-165:


 Summary: Add unit test for HddsDatanodeService
 Key: HDDS-165
 URL: https://issues.apache.org/jira/browse/HDDS-165
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Nanda kumar
 Fix For: 0.2.1


We have to add a unit test for the {{HddsDatanodeService}} class.
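
A first test might take roughly the JUnit shape below; the factory and lifecycle 
calls are assumptions about the HddsDatanodeService API rather than verified 
usage.

{code:java}
import static org.junit.Assert.assertNotNull;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.junit.Test;

/** Sketch of a unit test for HddsDatanodeService; API calls are assumed. */
public class TestHddsDatanodeService {

  @Test
  public void testStartupAndShutdown() throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumed factory + ServicePlugin-style lifecycle: create the service,
    // start it, then make sure stop/join shut it down cleanly.
    HddsDatanodeService service =
        HddsDatanodeService.createHddsDatanodeService(conf);
    assertNotNull(service);
    service.start(null);
    service.stop();
    service.join();
  }
}
{code}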






[jira] [Created] (HDDS-164) Add unit test for HddsDatanodeService

2018-06-14 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-164:


 Summary: Add unit test for HddsDatanodeService
 Key: HDDS-164
 URL: https://issues.apache.org/jira/browse/HDDS-164
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Nanda kumar
 Fix For: 0.2.1


We have to add a unit test for the {{HddsDatanodeService}} class.






[jira] [Created] (HDFS-13679) Fix Typo in javadoc for ScanInfoPerBlockPool#addAll

2018-06-14 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13679:
--

 Summary: Fix Typo in javadoc for ScanInfoPerBlockPool#addAll
 Key: HDFS-13679
 URL: https://issues.apache.org/jira/browse/HDFS-13679
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Created] (HDFS-13678) StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions

2018-06-14 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13678:


 Summary: StorageType is incompatible when rolling upgrade to 
2.6/2.6+ versions
 Key: HDFS-13678
 URL: https://issues.apache.org/jira/browse/HDFS-13678
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rolling upgrades
Affects Versions: 2.5.0
Reporter: Yiqun Lin


In version 2.6.0, we supported more storage types in HDFS, implemented in 
HDFS-6584. But this seems to be an incompatible change: when we rolling-upgrade 
our cluster from 2.5.0 to 2.6.0, the following error is thrown.
{noformat}
2018-06-14 11:43:39,246 ERROR [DataNode: 
[[[DISK]file:/home/vipshop/hard_disk/dfs/, [DISK]file:/data1/dfs/, 
[DISK]file:/data2/dfs/]] heartbeating to xx.xx.xx.xx:8022] 
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
for Block pool BP-670256553-xx.xx.xx.xx-1528795419404 (Datanode Uuid 
ab150e05-fcb7-49ed-b8ba-f05c27593fee) service to xx.xx.xx.xx:8022
java.lang.ArrayStoreException
 at java.util.ArrayList.toArray(ArrayList.java:412)
 at java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1034)
 at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1030)
 at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:836)
 at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:146)
 at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:566)
 at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:664)
 at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:835)
 at java.lang.Thread.run(Thread.java:748)
{noformat}
The scenario is that the old-version DN fails to parse the StorageType sent from 
the new-version NN. This is triggered by {{DNA_TRANSFER}} commands; that is to 
say, if there are under-replicated blocks, the error appears.

The convert logic is here:
{code:java}
  public static BlockCommand convert(BlockCommandProto blkCmd) {
    List<BlockProto> blockProtoList = blkCmd.getBlocksList();
    Block[] blocks = new Block[blockProtoList.size()];
    ...

    StorageType[][] targetStorageTypes = new StorageType[targetList.size()][];
    List<StorageTypesProto> targetStorageTypesList =
        blkCmd.getTargetStorageTypesList();
    if (targetStorageTypesList.isEmpty()) { // missing storage types
      for (int i = 0; i < targetStorageTypes.length; i++) {
        targetStorageTypes[i] = new StorageType[targets[i].length];
        Arrays.fill(targetStorageTypes[i], StorageType.DEFAULT);
      }
    } else {
      for (int i = 0; i < targetStorageTypes.length; i++) {
        List<StorageType> p =
            targetStorageTypesList.get(i).getStorageTypesList();
        targetStorageTypes[i] = p.toArray(new StorageType[p.size()]); // <=== should do the try-catch
      }
    }
{code}
An easy fix is to do the try-catch and fall back to the default storage type 
when parsing fails.
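
A minimal sketch of that fallback, using a simplified StorageType enum instead 
of the real protobuf types (all names here are illustrative):

{code:java}
import java.util.Arrays;
import java.util.List;

public class StorageTypeFallbackSketch {

  /** Simplified stand-in for org.apache.hadoop.fs.StorageType. */
  enum StorageType { DISK, SSD, ARCHIVE; static final StorageType DEFAULT = DISK; }

  /**
   * Convert a raw list (possibly holding enum values this version does not
   * know) into a StorageType array, falling back to the default type if the
   * conversion fails with the ArrayStoreException seen above.
   */
  static StorageType[] parseStorageTypes(List<?> rawList, int targetCount) {
    try {
      return rawList.toArray(new StorageType[rawList.size()]);
    } catch (ArrayStoreException e) {
      StorageType[] fallback = new StorageType[targetCount];
      Arrays.fill(fallback, StorageType.DEFAULT);
      return fallback;
    }
  }

  public static void main(String[] args) {
    // Elements that are not StorageType simulate the cross-version payload.
    System.out.println(Arrays.toString(
        parseStorageTypes(List.of("PROVIDED"), 2)));  // prints [DISK, DISK]
  }
}
{code}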






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-14 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/497/

[Jun 12, 2018 3:06:23 PM] (msingh) HDDS-158. DatanodeStateMachine endPoint task 
throws
[Jun 12, 2018 3:25:18 PM] (aengineer) HDDS-111. Include tests for Rest Client 
in TestVolume and TestBucket.
[Jun 12, 2018 3:35:37 PM] (aajisaka) YARN-8363. Upgrade commons-lang version to 
3.7 in hadoop-yarn-project.
[Jun 12, 2018 4:03:55 PM] (xyao) HDDS-130. 
TestGenerateOzoneRequiredConfigurations should use
[Jun 12, 2018 5:03:34 PM] (haibochen) YARN-6931. Make the aggregation interval 
in AppLevelTimelineCollector
[Jun 12, 2018 5:11:30 PM] (haibochen) YARN-8325. Miscellaneous QueueManager 
code clean up. (Szilard Nemeth via
[Jun 12, 2018 5:24:34 PM] (inigoiri) HADOOP-15529. 
ContainerLaunch#testInvalidEnvVariableSubstitutionType is
[Jun 12, 2018 5:59:50 PM] (inigoiri) YARN-8422. TestAMSimulator failing with 
NPE. Contributed by Giovanni
[Jun 12, 2018 6:16:24 PM] (xiao) HADOOP-15307. NFS: flavor AUTH_SYS should use 
VerifierNone. Contributed
[Jun 12, 2018 6:21:51 PM] (gera) MAPREDUCE-7108. TestFileOutputCommitter fails 
on Windows. (Zuoming Zhang
[Jun 12, 2018 9:16:14 PM] (inigoiri) HADOOP-15532. TestBasicDiskValidator fails 
with NoSuchFileException.
[Jun 12, 2018 10:36:52 PM] (arun suresh) MAPREDUCE-7101. Add config parameter 
to allow JHS to alway scan user dir
[Jun 13, 2018 12:40:32 AM] (eyang) HADOOP-15527.  Improve delay check for 
stopping processes.  
[Jun 13, 2018 1:28:05 AM] (wwei) YARN-8394. Improve data locality documentation 
for Capacity Scheduler.
[Jun 13, 2018 7:36:02 AM] (jitendra) HADOOP-15483. Upgrade jquery to version 
3.3.1. Contributed by Lokesh
[Jun 13, 2018 10:39:16 AM] (sunilg) YARN-8404. Timeline event publish need to 
be async to avoid Dispatcher
[Jun 13, 2018 12:05:55 PM] (yqlin) HDFS-13641. Add metrics for edit log 
tailing. Contributed by Chao Sun.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.compress.TestCodec 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileConcurrentReader 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestLeaseRecoveryStriped 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.tools.TestDFSAdminWithHA 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch