[jira] [Resolved] (HDFS-8095) Allow to configure the system default EC schema

2017-01-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-8095.
---
Resolution: Not A Problem

Resolving per the above comments, since we think this can be handled by a 
combination of HDFS-7859 and HDFS-11314. Thanks [~drankye] for the discussion!

> Allow to configure the system default EC schema
> ---
>
> Key: HDFS-8095
> URL: https://issues.apache.org/jira/browse/HDFS-8095
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-nice-to-have
>
> As suggested by [~umamaheswararao] and [~vinayrpet] in HDFS-8074, we may 
> want to allow configuring the system default EC schema, so that in any 
> deployment a cluster admin can define their own system default. In the 
> discussion we considered two approaches to configuring the system default 
> schema: 1) predefine it in the {{ecschema-def.xml}} file, making sure it's 
> not changed; 2) configure the key parameter values as properties in 
> {{core-site.xml}}. Filing this for future consideration so it isn't forgotten.
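
For illustration, a minimal sketch of approach 2. Since this was never 
implemented, the property names below are hypothetical, not actual Hadoop 
configuration keys:

{code:title=core-site.xml (hypothetical)|borderStyle=solid}
<!-- Illustrative only: these keys do not exist in Hadoop. The idea is to
     express the default schema's codec and layout as plain properties. -->
<property>
  <name>io.erasurecode.schema.default.codec</name>
  <value>rs</value>
</property>
<property>
  <name>io.erasurecode.schema.default.dataUnits</name>
  <value>6</value>
</property>
<property>
  <name>io.erasurecode.schema.default.parityUnits</name>
  <value>3</value>
</property>
{code}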






[jira] [Created] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-01-10 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11314:
--

 Summary: Validate client-provided EC schema on the NameNode
 Key: HDFS-11314
 URL: https://issues.apache.org/jira/browse/HDFS-11314
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


Filing based on discussion in HDFS-8095. A user might specify a policy that is 
not appropriate for the cluster, e.g. an RS(10,4) policy, which needs 14 
DataNodes for a full block group, when the cluster only has 10 nodes. The NN 
should only allow the client to choose from a pre-approved list determined by 
the cluster administrator.
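
A minimal sketch of the proposed check, assuming an admin-curated whitelist; 
the class and method names here are hypothetical, not the actual NameNode API:

{code:title=EcPolicyValidator.java (sketch)|borderStyle=solid}
import java.util.Set;

// Hypothetical sketch: reject client-chosen EC policies that the
// cluster administrator has not pre-approved.
public class EcPolicyValidator {
  private final Set<String> enabledPolicies;

  public EcPolicyValidator(Set<String> enabledPolicies) {
    this.enabledPolicies = enabledPolicies;
  }

  public void validate(String policyName) {
    if (!enabledPolicies.contains(policyName)) {
      throw new IllegalArgumentException("EC policy " + policyName
          + " is not enabled on this cluster");
    }
  }
}
{code}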






[jira] [Resolved] (HDFS-10258) Erasure Coding: support small cluster whose #DataNode < # (Blocks in a BlockGroup)

2017-01-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-10258.

Resolution: Later

We also committed the XOR(2,1) policy, so I think the priority of this JIRA is 
lessened. We can revisit later if small clusters turn out to be important.

> Erasure Coding: support small cluster whose #DataNode < # (Blocks in a 
> BlockGroup)
> --
>
> Key: HDFS-10258
> URL: https://issues.apache.org/jira/browse/HDFS-10258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
>
> Currently EC does not support small clusters where the number of DataNodes 
> is smaller than the number of blocks in a block group. This subtask will 
> address that problem.
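
For context, a hedged illustration of the constraint; the check below is 
illustrative, not HDFS code:

{code:title=BlockGroupPlacement.java (sketch)|borderStyle=solid}
// Illustrative only: a striped block group writes each of its data and
// parity blocks to a distinct DataNode, so a full RS(6,3) group needs
// at least 6 + 3 = 9 DataNodes.
public class BlockGroupPlacement {
  public static boolean canPlaceFullGroup(int liveDataNodes,
      int dataBlocks, int parityBlocks) {
    return liveDataNodes >= dataBlocks + parityBlocks;
  }

  public static void main(String[] args) {
    System.out.println(canPlaceFullGroup(5, 6, 3)); // false: 5 DNs < 9
  }
}
{code}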






[jira] [Created] (HDFS-11313) Segmented Block Reports

2017-01-10 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-11313:
--

 Summary: Segmented Block Reports
 Key: HDFS-11313
 URL: https://issues.apache.org/jira/browse/HDFS-11313
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.6.2
Reporter: Konstantin Shvachko


Block reports from a single DataNode can currently be split into multiple RPCs, 
each reporting a single DataNode storage (disk). The reports are still large, 
since disks keep getting bigger. Splitting blockReport RPCs into multiple 
smaller calls would improve NameNode performance and overall HDFS stability.
This has been discussed in multiple jiras. The approach here is to let the 
NameNode divide the blockID space into segments and then ask DataNodes to 
report replicas in a particular range of IDs.
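
A minimal sketch of the segmentation idea only; the class below is 
hypothetical and is not the actual report protocol:

{code:title=BlockIdSegments.java (sketch)|borderStyle=solid}
import java.math.BigInteger;

// Hypothetical sketch: divide the signed-long block ID space into
// numSegments contiguous ranges. The NN could then ask a DN to report
// only replicas with bounds[i] <= blockId < bounds[i + 1].
public class BlockIdSegments {
  public static long[] segmentBounds(int numSegments) {
    BigInteger min = BigInteger.valueOf(Long.MIN_VALUE);
    BigInteger span = BigInteger.valueOf(Long.MAX_VALUE).subtract(min);
    long[] bounds = new long[numSegments + 1];
    for (int i = 0; i <= numSegments; i++) {
      // min + span * i / numSegments, computed without long overflow
      bounds[i] = min.add(span.multiply(BigInteger.valueOf(i))
          .divide(BigInteger.valueOf(numSegments))).longValueExact();
    }
    return bounds;
  }
}
{code}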






[jira] [Resolved] (HDFS-8796) Erasure coding: merge HDFS-8499 to EC branch and refactor BlockInfoStriped

2017-01-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-8796.
---
Resolution: Invalid

I think this JIRA is invalid now that the EC branch has been merged to trunk; 
resolving.

> Erasure coding: merge HDFS-8499 to EC branch and refactor BlockInfoStriped
> --
>
> Key: HDFS-8796
> URL: https://issues.apache.org/jira/browse/HDFS-8796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8796-HDFS-7285.00.patch, 
> HDFS-8796-HDFS-7285.01-part1.patch, HDFS-8796-HDFS-7285.01-part2.patch
>
>
> Separating this change from the HDFS-8728 discussion. Per suggestion from 
> [~szetszwo], clarifying the description of the change.






[jira] [Resolved] (HDFS-7674) [umbrella] Adding metrics for Erasure Coding

2017-01-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-7674.
---
Resolution: Done

I think we can close this since the subtasks are resolved. Thanks everyone for 
the hard work!

> [umbrella] Adding metrics for Erasure Coding
> 
>
> Key: HDFS-7674
> URL: https://issues.apache.org/jira/browse/HDFS-7674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Li Bo
>
> As the design (in HDFS-7285) indicates, erasure coding involves non-trivial 
> impact and workload for the NameNode, DataNode and client; it also allows 
> configurable and pluggable erasure codecs and schemas with flexible tradeoff 
> options (see HDFS-7337). To support the necessary analysis and tuning, we 
> should have meaningful metrics for the EC support, such as encoding/decoding 
> tasks, recovered blocks, read/transferred data size, and computation time.






[jira] [Created] (HDFS-11312) Discrepancy in nonDfsUsed index in protobuf

2017-01-10 Thread Sean Mackrory (JIRA)
Sean Mackrory created HDFS-11312:


 Summary: Discrepancy in nonDfsUsed index in protobuf
 Key: HDFS-11312
 URL: https://issues.apache.org/jira/browse/HDFS-11312
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory
Priority: Minor


The patches for HDFS-9038 had a discrepancy between trunk and branch-2.7: in 
one message type, nonDfsUsed is given two different indices. This is a minor 
wire incompatibility that is easy to fix...
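
For readers unfamiliar with the failure mode, a hypothetical illustration (not 
the actual HDFS protobuf definitions) of why a field tag mismatch breaks wire 
compatibility:

{code:title=example.proto (hypothetical)|borderStyle=solid}
// Illustrative only: field names and numbers do not match HDFS's .proto
// files. If trunk serializes nonDfsUsed with tag 4 while branch-2.7
// expects tag 5, a 2.7 reader decodes a trunk writer's nonDfsUsed as an
// unknown field and reads 0 for its own (and vice versa).
message StorageReportProto {
  optional uint64 capacity   = 1;
  optional uint64 dfsUsed    = 2;
  optional uint64 remaining  = 3;
  optional uint64 nonDfsUsed = 4; // branch-2.7 hypothetically used 5
}
{code}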






[jira] [Reopened] (HDFS-8498) Blocks can be committed with wrong size

2017-01-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao reopened HDFS-8498:
-

> Blocks can be committed with wrong size
> ---
>
> Key: HDFS-8498
> URL: https://issues.apache.org/jira/browse/HDFS-8498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> When an IBR for a UC block arrives, the NN updates the expected location's 
> block and replica state _only_ if it's on an unexpected storage for an 
> expected DN.  If it's for an expected storage, only the genstamp is updated.  
> When the block is committed and the expected locations are verified, only 
> the genstamp is checked.  The size is not checked, but it wasn't updated in 
> the expected locations anyway.
> A faulty client may misreport the size when committing the block, leaving 
> the block effectively corrupted.  If the NN issues replications, the received 
> IBR is considered corrupt, so the NN invalidates the block and immediately 
> issues another replication.  The NN eventually realizes all the original 
> replicas are corrupt once full BRs are received from the original DNs.
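
A hypothetical illustration of the missing check (not actual NameNode code); 
it assumes the NN also records replica lengths from IBRs, which per the 
description it currently does not:

{code:title=CommitSizeCheck.java (sketch)|borderStyle=solid}
// Illustrative only: at commit time, compare the client-reported block
// length against the lengths DataNodes reported via IBRs, instead of
// trusting the client.
public class CommitSizeCheck {
  static void checkCommitSize(long clientReportedLen, long[] ibrLens) {
    for (long replicaLen : ibrLens) {
      if (replicaLen != clientReportedLen) {
        throw new IllegalStateException("Commit size " + clientReportedLen
            + " disagrees with reported replica length " + replicaLen);
      }
    }
  }
}
{code}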






[jira] [Resolved] (HDFS-11308) NameNode doFence state judgment problem

2017-01-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-11308.

Resolution: Duplicate

Thanks [~tangshangwen] and [~iwasakims] for filing the jira and commenting. 
Let's resolve this one as a dup of HDFS-3618, since a patch is pending there.

> NameNode doFence state judgment problem
> ---
>
> Key: HDFS-11308
> URL: https://issues.apache.org/jira/browse/HDFS-11308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.1
> Environment: CentOS Linux release 7.1.1503 (Core)
>Reporter: tangshangwen
>
> In our cluster, I found something abnormal in the ZKFC log:
> {noformat}
> [2017-01-10T01:42:37.168+08:00] [INFO] 
> hadoop.ha.SshFenceByTcpPort.doFence(SshFenceByTcpPort.java 147) [Health 
> Monitor for NameNode at 
> xxx-xxx-172xxx.hadoop.xxx.com/xxx.xxx.172.xxx:8021-EventThread] : 
> Indeterminate response from trying to kill service. Verifying whether it is 
> running using nc...
> [2017-01-10T01:42:37.234+08:00] [WARN] 
> hadoop.ha.SshFenceByTcpPort.pump(StreamPumper.java 88) [nc -z 
> xxx-xxx-172xx.hadoop.xx.com 8021 via ssh: StreamPumper for STDERR] : nc -z 
> xxx-xxx-172xx.hadoop.xxx.com 8021 via ssh: nc: invalid option -- 'z'
> [2017-01-10T01:42:37.235+08:00] [WARN] 
> hadoop.ha.SshFenceByTcpPort.pump(StreamPumper.java 88) [nc -z 
> xxx-xxx-172xx.hadoop.xxx.com 8021 via ssh: StreamPumper for STDERR] : nc -z 
> xxx-xxx-17224.hadoop.xxx.com 8021 via ssh: Ncat: Try `--help' or man(1) ncat 
> for more information, usage options and help. QUITTING.
> {noformat}
> When nc fails like this, the return value is 2 and sshfence success cannot 
> be confirmed; this may lead to problems.
> {code:title=SshFenceByTcpPort.java|borderStyle=solid}
> rc = execCommand(session, "nc -z " + serviceAddr.getHostName() +
>     " " + serviceAddr.getPort());
> if (rc == 0) {
>   // the service is still listening - we are unable to fence
>   LOG.warn("Unable to fence - it is running but we cannot kill it");
>   return false;
> } else {
>   LOG.info("Verified that the service is down.");
>   return true;
> }
> {code}
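
A minimal sketch of a stricter check (not the committed fix); it assumes nc 
returns 0 when the port is open, 1 when it is closed, and other codes (such as 
2 for a usage error like the unsupported {{-z}} flag above) when nc itself 
fails:

{code:title=stricter rc handling (sketch)|borderStyle=solid}
if (rc == 0) {
  // the service is still listening - we are unable to fence
  LOG.warn("Unable to fence - it is running but we cannot kill it");
  return false;
} else if (rc == 1) {
  // nc ran and the port is closed
  LOG.info("Verified that the service is down.");
  return true;
} else {
  // nc itself failed (e.g. rc == 2), so the state is indeterminate
  LOG.warn("Unable to verify service state, nc exited with " + rc);
  return false;
}
{code}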






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-01-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/

[Jan 9, 2017 4:40:39 PM] (wangda) YARN-3955. Support for application priority 
ACLs in queues of
[Jan 9, 2017 6:32:18 PM] (arp) HDFS-11301. Double wrapping over 
RandomAccessFile in
[Jan 9, 2017 11:18:26 PM] (weichiu) HADOOP-13953. Make FTPFileSystem's data 
connection mode and transfer
[Jan 9, 2017 11:44:42 PM] (yzhang) HDFS-11292. log lastWrittenTxId etc info in 
logSyncAll. Contributed by
[Jan 10, 2017 2:01:37 AM] (wang) HADOOP-13885. Implement getLinkTarget for 
ViewFileSystem. Contributed by
[Jan 10, 2017 2:05:33 AM] (jing9) HDFS-11273. Move TransferFsImage#doGetUrl 
function to a Util class.
[Jan 10, 2017 2:14:46 AM] (junping_du) YARN-4148. When killing app, RM releases 
app's resource before they are
[Jan 10, 2017 6:12:58 AM] (templedf) YARN-6073. Misuse of format specifier in 
Preconditions.checkArgument
[Jan 10, 2017 8:38:01 AM] (sunilg) YARN-5899. Debug log in 
AbstractCSQueue#canAssignToThisQueue needs
[Jan 10, 2017 10:05:01 AM] (naganarasimha_gr) YARN-5937. stop-yarn.sh is not 
able to gracefully stop node managers.
[Jan 10, 2017 10:24:16 AM] (naganarasimha_gr) YARN-6054. TimelineServer fails 
to start when some LevelDb state files
[Jan 10, 2017 11:37:58 AM] (lei) HDFS-11259. Update fsck to display maintenance 
state info. (Manoj




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-compile-root.txt
  [124K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-compile-root.txt
  [124K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-compile-root.txt
  [124K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [200K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/213/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   

[jira] [Created] (HDFS-11311) HDFS fsck continues to report all blocks present when DataNode is restarted with empty data directories

2017-01-10 Thread André Frimberger (JIRA)
André Frimberger created HDFS-11311:
---

 Summary: HDFS fsck continues to report all blocks present when 
DataNode is restarted with empty data directories
 Key: HDFS-11311
 URL: https://issues.apache.org/jira/browse/HDFS-11311
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-alpha1, 2.7.3
Reporter: André Frimberger


During cluster maintenance, we had to change parameters of the underlying disk 
filesystem and we stopped the DataNode, reformatted all of its data directories 
and started the DataNode again in under 10 minutes with no data and only the 
{{VERSION}} file present. Running fsck afterwards reports that all blocks are 
fully replicated, which does not reflect the true state of HDFS. If an 
administrator trusts {{fsck}} and continues to replace further DataNodes, *data 
will be lost!*

Steps to reproduce:
1. Shut down the DataNode
2. Remove all BlockPools from all data directories (only the {{VERSION}} file 
is present)
3. Start the DataNode again in under 10.5 minutes
4. Run {{hdfs fsck /}}

*Actual result:* Average replication is falsely shown as 3.0
*Expected result:* Average replication factor is < 3.0

*Workaround:* Trigger a block report with {{hdfs dfsadmin -triggerBlockReport 
$dn_host:$ipc_port}}

*Cause:* The first block report is handled differently by the NameNode: only 
added blocks are processed. This behaviour was introduced in HDFS-7980 for 
performance reasons, but it is applied too broadly, and in our case data can 
be lost.

*Fix:* We suggest using stricter conditions on applying 
{{processFirstBlockReport}} in {{BlockManager:processReport()}}:
Change
{code}
if (storageInfo.getBlockReportCount() == 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

to

{code}
if (storageInfo.getBlockReportCount() == 0
    && storageInfo.getState() != State.FAILED
    && storageInfo.numBlocks() > 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

In case the DataNode reports no blocks for a data directory, it might be a new 
DataNode or the data directory may have been emptied for whatever reason 
(offline replacement of storage, reformatting of data disk, etc.). In either 
case, the changes should be reflected in the output of {{fsck}} in less than 6 
hours to prevent data loss due to misleading output.







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-01-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/

[Jan 9, 2017 1:24:22 PM] (varunsaxena) YARN-6074. FlowRunEntity does not 
deserialize long values correctly
[Jan 9, 2017 4:40:39 PM] (wangda) YARN-3955. Support for application priority 
ACLs in queues of
[Jan 9, 2017 6:32:18 PM] (arp) HDFS-11301. Double wrapping over 
RandomAccessFile in
[Jan 9, 2017 11:18:26 PM] (weichiu) HADOOP-13953. Make FTPFileSystem's data 
connection mode and transfer
[Jan 9, 2017 11:44:42 PM] (yzhang) HDFS-11292. log lastWrittenTxId etc info in 
logSyncAll. Contributed by
[Jan 10, 2017 2:01:37 AM] (wang) HADOOP-13885. Implement getLinkTarget for 
ViewFileSystem. Contributed by
[Jan 10, 2017 2:05:33 AM] (jing9) HDFS-11273. Move TransferFsImage#doGetUrl 
function to a Util class.
[Jan 10, 2017 2:14:46 AM] (junping_du) YARN-4148. When killing app, RM releases 
app's resource before they are
[Jan 10, 2017 6:12:58 AM] (templedf) YARN-6073. Misuse of format specifier in 
Preconditions.checkArgument




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/282/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HDFS-11310) Reduce the performance impact of the balancer (trunk port)

2017-01-10 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-11310:
--

 Summary: Reduce the performance impact of the balancer (trunk port)
 Key: HDFS-11310
 URL: https://issues.apache.org/jira/browse/HDFS-11310
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, namenode
Affects Versions: 3.0.0-alpha1
Reporter: Daryn Sharp
Priority: Critical


HDFS-7967 introduced a highly performant balancer getBlocks() query that 
scales to large/dense clusters.  Its simple implementation depends on the 
triplets data structure.  HDFS-9260 removed the triplets, which fundamentally 
changes the implementation.  Either that patch must be reverted or the 
getBlocks() patch needs to be reimplemented.


