[jira] [Commented] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121788#comment-16121788
 ] 

Hadoop QA commented on HDFS-12286:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 11 new + 3 unchanged - 0 fixed = 14 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12286 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881219/HDFS-12286-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cab73f2bafa1 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0e32bf1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20635/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20635/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20635/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend MBeans utility to add any key value pairs to the registered MXBeans
> --
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton

[jira] [Commented] (HDFS-12285) Better handling of namenode ip address change

2017-08-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121607#comment-16121607
 ] 

Rushabh S Shah commented on HDFS-12285:
---

Is this somehow related to HADOOP-12125 or HDFS-8068?

> Better handling of namenode ip address change
> -
>
> Key: HDFS-12285
> URL: https://issues.apache.org/jira/browse/HDFS-12285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>
> RPC client layer provides functionality to detect ip address change:
> {noformat}
> Client.java
> private synchronized boolean updateAddress() throws IOException {
>   // Do a fresh lookup with the old host name.
>   InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
>server.getHostName(), server.getPort());
> ..
> }
> {noformat}
> To use this feature, we need to enable retry via 
> {{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} 
> RetryPolicy will be used, which causes {{handleConnectionFailure}} to throw a 
> {{ConnectException}} without retrying with the new ip address.
> {noformat}
> private void handleConnectionFailure(int curRetries, IOException ioe
> ) throws IOException {
>   closeConnection();
>   final RetryAction action;
>   try {
> action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
>   } catch(Exception e) {
> throw e instanceof IOException? (IOException)e: new IOException(e);
>   }
>   ..
>   }
> {noformat}
> However, using such configuration isn't ideal. What happens is that DFSClient 
> still holds onto the cached old ip address created by {{namenode = 
> proxyInfo.getProxy();}}. Thus when a new rpc connection is created, it starts 
> with the old ip, followed by a retry with the new ip. It would be nice if 
> DFSClient could update the namenode proxy automatically upon an ip address change.
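For reference, a minimal client-side configuration sketch for the retry behavior described above (the property name is taken from the description; enabling it is shown only for illustration):
{noformat}
<!-- hdfs-site.xml on the client -->
<property>
  <name>dfs.client.retry.policy.enabled</name>
  <value>true</value>
</property>
{noformat}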






[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement block deleting service to delete stale blocks at background

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122895#comment-16122895
 ] 

Hadoop QA commented on HDFS-12196:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
23s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 48s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Thread passed where Runnable expected in 
org.apache.hadoop.utils.BackgroundService.start()  At BackgroundService.java:in 
org.apache.hadoop.utils.BackgroundService.start()  At 
BackgroundService.java:[line 85] |
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12196 |

[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122878#comment-16122878
 ] 

Vinayakumar B commented on HDFS-11738:
--

I see that the following changes in 
{{DFSInputStream#hedgedFetchBlockByteRange(..)}} are present in both HDFS-11303 and 
this jira:
{code}
 futures.add(firstRequest);
+Future future = null;
 try {
-  Future future = hedgedService.poll(
+  future = hedgedService.poll(
   conf.getHedgedReadThresholdMillis(), TimeUnit.MILLISECONDS);
   if (future != null) {
 ByteBuffer result = future.get();
@@ -1142,16 +1143,18 @@ private void hedgedFetchBlockByteRange(LocatedBlock 
block, long start,
   }
   DFSClient.LOG.debug("Waited {}ms to read from {}; spawning hedged "
   + "read", conf.getHedgedReadThresholdMillis(), chosenNode.info);
-  // Ignore this node on next go around.
-  ignored.add(chosenNode.info);
   dfsClient.getHedgedReadMetrics().incHedgedReadOps();
   // continue; no need to refresh block locations
 } catch (ExecutionException e) {
-  // Ignore
+  futures.remove(future);
 } catch (InterruptedException e) {
   throw new InterruptedIOException(
   "Interrupted while waiting for reading task");
 }
+// Ignore this node on next go around.
+// If poll timeout and the request still ongoing, don't consider it
+// again. If read data failed, don't consider it either.
+ignored.add(chosenNode.info);
   } else {
 // We are starting up a 'hedged' read. We have a read already
 // ongoing. Call getBestNodeDNAddrPair instead of chooseDataNode.
{code}
I think it's fair to commit HDFS-11303 first and give credit to [~jzhuge]'s 
efforts, as it has an associated test written along with the change.

I can update the patch again once HDFS-11303 is committed. Anyway, the test in 
my patch will fail even after HDFS-11303 is committed, i.e. HDFS-11303 is not 
exactly the fix for this issue.
So let's get HDFS-11303 committed first.

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch
>
>
> Scenario: 
> Same as HDFS-11708.
> During a hedged read, 
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread goes on refetching locations 
> in a loop and gets stuck there till all 3 retries to fetch locations are 
> exhausted, which consumes ~20 seconds with exponential retry time.






[jira] [Commented] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122889#comment-16122889
 ] 

John Zhuge commented on HDFS-11738:
---

[~zhangchen] is the original author. [~jojochuang] and I just tried to help 
move it forward.

I am ok with [~vinayrpet]'s plan, since [~zhangchen]'s work in DFSInputStream 
and the new unit test deserve the credit. 

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch
>
>
> Scenario: 
> Same as HDFS-11708.
> During a hedged read, 
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread goes on refetching locations 
> in a loop and gets stuck there till all 3 retries to fetch locations are 
> exhausted, which consumes ~20 seconds with exponential retry time.






[jira] [Commented] (HDFS-12287) Remove a no-longer applicable TODO comment in DatanodeManager

2017-08-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122900#comment-16122900
 ] 

Yiqun Lin commented on HDFS-12287:
--

+1, will commit this shortly.

> Remove a no-longer applicable TODO comment in DatanodeManager
> -
>
> Key: HDFS-12287
> URL: https://issues.apache.org/jira/browse/HDFS-12287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12287.001.patch
>
>
> {{DatanodeManager}} has this TODO comment:
> {code}
> // TODO: Enables DFSNetworkTopology by default after more stress
> // testings/validations.
> {code}
> This has been resolved in HDFS-11998, but that change missed removing the comment.






[jira] [Commented] (HDFS-12274) Ozone: Corona: move corona from test to tools package

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121149#comment-16121149
 ] 

Hadoop QA commented on HDFS-12274:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
7s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12274 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881136/HDFS-12274-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  |
| uname | Linux b4b7f05baa26 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121110#comment-16121110
 ] 

Kai Zheng commented on HDFS-11882:
--

Hi [~ajisakaa],

I'm trying to understand what you meant.
bq. When sentBytes is 18*64k and the cellSize is 64k, 
The schema used should be 6 + 3, so it sent 2 full stripes, i.e. 2 x 9 x 64k bytes.
bq. DN1~8 will have two 64k data blocks, DN9~10 will have one 64k data block, 
and DN11~14 will have two 64k parity blocks. 
In the 6 + 3 schema, it only needs to write to 9 DNs. But here it looks like it 
writes to DN1~14, i.e. 14 DNs. Why? I'm pretty confused here.
bq. In this situation, getNumAckedStripes() will return 2 if DN9 and DN10 are 
failing. 
OK, if it returns 2, that means the number of acked stripes is 2, so the acked 
bytes should be 2 x 9 x 64k, which should be exactly equal to the sent bytes.
bq. That way, in the testcase ackedBytes will become 20*64k, which is greater 
than sentBytes.
Looks like 2 extra cells in addition to the 2 full stripes were also acked; since 
DN9 and DN10 are failing, they shouldn't contribute the 2 extra acked cells.

I'm also trying to understand the root cause, i.e. why the acked bytes are greater 
than the sent bytes. Could you help explain a little bit for me? Thanks!
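To make the numbers concrete, here is a small sketch of the arithmetic being discussed, assuming the RS 6+3 schema and 64k cell size from above (illustrative only, not code from the patch):
{code}
// Illustrative arithmetic only, assuming RS 6+3 with 64k cells.
int dataUnits = 6, parityUnits = 3;
long cellSize = 64 * 1024;
long stripeWidth = (dataUnits + parityUnits) * cellSize; // 9 x 64k per full stripe
long sentBytes = 2 * stripeWidth;                        // 2 full stripes = 18 x 64k
int numAckedStripes = 2;                                 // value under discussion
long ackedBytes = numAckedStripes * stripeWidth;         // expected: also 18 x 64k
// The reported failure instead has ackedBytes == 20 x 64k > sentBytes,
// i.e. 2 extra cells beyond the 2 full stripes were counted as acked.
{code}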

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Attachment: HDFS-12054.002.patch

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Description: In the process of FSNamesystem#addErasureCodingPolicies, it 
would be better to call checkNameNodeSafeMode() to ensure NN is not in 
safemode.  (was: In the process of FSNamesystem#addECPolicies, it would be 
better to call checkNameNodeSafeMode() to ensure NN is not in safemode.)
Summary: FSNamesystem#addErasureCodingPolicies should call 
checkNameNodeSafeMode() to ensure Namenode is not in safemode  (was: 
FSNamesystem#addECPolicies should call checkNameNodeSafeMode() to ensure 
Namenode is not in safemode)

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Attachment: (was: HDFS-12054.002.patch)

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Updated] (HDFS-12274) Ozone: Corona: move corona from test to tools package

2017-08-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12274:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

I have committed this to the feature branch, thanks for the contribution 
[~nandakumar131]!

> Ozone: Corona: move corona from test to tools package
> -
>
> Key: HDFS-12274
> URL: https://issues.apache.org/jira/browse/HDFS-12274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12274-HDFS-7240.000.patch, 
> HDFS-12274-HDFS-7240.001.patch, HDFS-12274-HDFS-7240.002.patch, 
> HDFS-12274-HDFS-7240.003.patch
>
>
> This jira is to move {{Corona}} from test to tools package.






[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-10 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121374#comment-16121374
 ] 

SammiChen commented on HDFS-11082:
--

I uploaded the 002 patch 3 days ago, but the build has still not come out. 
Ping [~drankye] and [~andrew.wang]: could you help check it? 

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Commented] (HDFS-12222) Add EC information to BlockLocation

2017-08-10 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121223#comment-16121223
 ] 

Huafeng Wang commented on HDFS-12222:
-

I've checked the related code and found it is not easy to provide other 
functions to get parity or data blocks.
The problem is that LocatedFileStatus is a subclass of FileStatus, both located in 
the hadoop-common module, and neither carries file-level erasure coding 
policy information. Without that specific policy information, LocatedFileStatus 
has no idea which BlockLocation is actually a parity block. 

After discussing with Kai offline, one approach is to add an ECSchema to 
LocatedFileStatus so that we can determine which blocks are parity blocks when 
erasure coding is enabled. 
Any suggestions here? Thanks.
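To illustrate the idea, here is a hypothetical sketch (the method name and placement are assumptions, not an existing API) of how a LocatedFileStatus carrying the file's {{ECSchema}} could classify block locations:
{code}
// Hypothetical sketch only: if LocatedFileStatus carried the file's ECSchema,
// an index inside a block group could be classified as data vs. parity.
boolean isParityBlock(int indexInBlockGroup, ECSchema schema) {
  if (schema == null) {
    return false; // replicated file: every location is a data block
  }
  // Indices [0, numDataUnits) are data cells; the rest are parity.
  return indexInBlockGroup >= schema.getNumDataUnits();
}
{code}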

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-12222
> URL: https://issues.apache.org/jira/browse/HDFS-12222
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code like this that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.






[jira] [Commented] (HDFS-11898) DFSClient#isHedgedReadsEnabled() should be per client flag

2017-08-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121250#comment-16121250
 ] 

John Zhuge commented on HDFS-11898:
---

[~shv] Could we get this in first? We can work on the non-static pool solution in 
HDFS-11900 since it may have scalability implications. See 
https://issues.apache.org/jira/browse/HDFS-11900?focusedCommentId=16028594=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16028594.

> DFSClient#isHedgedReadsEnabled() should be per client flag 
> ---
>
> Key: HDFS-11898
> URL: https://issues.apache.org/jira/browse/HDFS-11898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11898-01.patch, HDFS-11898-02.patch
>
>
> DFSClient#isHedgedReadsEnabled() returns a value based on the static 
> {{HEDGED_READ_THREAD_POOL}}. 
> Hence if any client in the JVM initializes this, the reads of all remaining 
> clients will go through hedged read as well.
> This flag should be a per-client value.
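A rough sketch of the shape of the fix being discussed (illustrative only, not the attached patch):
{code}
// Keep the thread pool shared across clients, but make the enabled
// decision a per-client flag instead of deriving it from the static pool.
private static ThreadPoolExecutor HEDGED_READ_THREAD_POOL; // shared in the JVM
private final boolean hedgedReadsEnabled; // set from this client's own conf

boolean isHedgedReadsEnabled() {
  return hedgedReadsEnabled && HEDGED_READ_THREAD_POOL != null;
}
{code}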






[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121262#comment-16121262
 ] 

Hadoop QA commented on HDFS-12054:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 213 unchanged - 0 fixed = 215 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestSafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12054 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881153/HDFS-12054.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d337c5bcb30d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d953c2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20630/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20630/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20630/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20630/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20630/console |
| Powered by | Apache Yetus 

[jira] [Updated] (HDFS-12282) Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode via HB and handles response

2017-08-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12282:
---
Attachment: HDFS-12282.WIP.patch

> Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode 
> via HB and handles response
> -
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion message via HB to datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB






[jira] [Commented] (HDFS-12274) Ozone: Corona: move corona from test to tools package

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121279#comment-16121279
 ] 

Weiwei Yang commented on HDFS-12274:


Looks good to me. I am going to test these failed UTs locally; if they are 
unrelated, I will commit the v3 patch soon. Thanks [~nandakumar131].

> Ozone: Corona: move corona from test to tools package
> -
>
> Key: HDFS-12274
> URL: https://issues.apache.org/jira/browse/HDFS-12274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12274-HDFS-7240.000.patch, 
> HDFS-12274-HDFS-7240.001.patch, HDFS-12274-HDFS-7240.002.patch, 
> HDFS-12274-HDFS-7240.003.patch
>
>
> This jira is to move {{Corona}} from test to tools package.






[jira] [Updated] (HDFS-12066) When Namenode is in safemode, it may not be allowed to remove a user's erasure coding policy

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12066:
-
Attachment: HDFS-12066.002.patch

> When Namenode is in safemode, it may not be allowed to remove a user's erasure 
> coding policy
> --
>
> Key: HDFS-12066
> URL: https://issues.apache.org/jira/browse/HDFS-12066
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12066.001.patch, HDFS-12066.002.patch
>
>
> FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
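For illustration, the usual FSNamesystem guard pattern the description refers to, sketched after existing write operations (the variable name is assumed; this is not the attached patch):
{code}
// Sketch of the standard safemode guard used by FSNamesystem write ops.
writeLock();
try {
  checkOperation(OperationCategory.WRITE);
  checkNameNodeSafeMode("Cannot remove erasure coding policy " + ecPolicyName);
  // ... perform the removal only after the safemode check passes ...
} finally {
  writeUnlock();
}
{code}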






[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Attachment: HDFS-12054.003.patch

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Commented] (HDFS-12274) Ozone: Corona: move corona from test to tools package

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121328#comment-16121328
 ] 

Weiwei Yang commented on HDFS-12274:


The tests are failing even without this patch. Since the change in this patch 
is trivial, I am going to commit it; I will take a look at the UT failures when 
I get some time. Thanks.

> Ozone: Corona: move corona from test to tools package
> -
>
> Key: HDFS-12274
> URL: https://issues.apache.org/jira/browse/HDFS-12274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12274-HDFS-7240.000.patch, 
> HDFS-12274-HDFS-7240.001.patch, HDFS-12274-HDFS-7240.002.patch, 
> HDFS-12274-HDFS-7240.003.patch
>
>
> This jira is to move {{Corona}} from test to tools package.






[jira] [Comment Edited] (HDFS-12282) Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode via HB and handles response

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121355#comment-16121355
 ] 

Weiwei Yang edited comment on HDFS-12282 at 8/10/17 9:34 AM:
-

Attached a WIP patch; I will submit a working patch once HDFS-12283 is 
resolved. Feel free to comment on the initial patch :).


was (Author: cheersyang):
Attached a WIP patch, I will submit a working patch once HDFS-12283 is resolved.

> Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode 
> via HB and handles response
> -
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion message via HB to datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB






[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode via HB and handles response

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121355#comment-16121355
 ] 

Weiwei Yang commented on HDFS-12282:


Attached a WIP patch; I will submit a working patch once HDFS-12283 is resolved.

> Ozone: DeleteKey-4: SCM periodically sends block deletion message to datanode 
> via HB and handles response
> -
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM to datanode 
> interactions, including:
> # SCM sends block deletion message via HB to datanode
> # datanode changes block state to deleting when it processes the HB response
> # datanode sends deletion ACKs back to SCM
> # SCM handles ACKs and removes blocks in DB






[jira] [Created] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12286:
---

 Summary: Extend MBeans utility to add any key value pairs to the 
registered MXBeans
 Key: HDFS-12286
 URL: https://issues.apache.org/jira/browse/HDFS-12286
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: HDFS-7240


The MBeans class in hadoop-common helps to register an MXBean with the platform 
MBean server. Unfortunately it supports only the Name and Service keys, even 
though the JMX specification allows any key/value pairs to be used as part of the 
ObjectName.

This patch adds the possibility to define more key/value pairs for the JMX 
ObjectName.

It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers have 
common JMX properties, but to use a common HTML component to display them we need 
a way to get the JMX beans of the SCM server and the KSM server with one query.

This becomes possible by adding an additional (common) key/value property to the 
ObjectName.
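A sketch of what this enables at the JMX level, using plain javax.management calls (the "Component" key and the bean variable are illustrative assumptions, not part of the patch):
{code}
// ObjectName accepts arbitrary key/value properties beyond Name and Service.
Hashtable<String, String> props = new Hashtable<>();
props.put("Service", "StorageContainerManager");
props.put("Name", "SCMInfo");
props.put("Component", "SCM"); // the kind of extra pair this change allows
ObjectName name = new ObjectName("Hadoop", props);
ManagementFactory.getPlatformMBeanServer().registerMBean(theMXBean, name);
{code}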






[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121929#comment-16121929
 ] 

Wei-Chiu Chuang commented on HDFS-12054:


Thanks for the new patch. As a minor improvement, 
would you please also add a statement after 
{code}
ns.addErasureCodingPolicies(policyArray);
{code}
just to make sure it throws an exception in safe mode?
{code}
fail("AddECPolicyResponse should have failed.");
{code}
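Putting the two pieces together, the suggested test shape would look roughly like this (a sketch; the exception type and message check are assumptions):
{code}
try {
  ns.addErasureCodingPolicies(policyArray);
  fail("AddECPolicyResponse should have failed.");
} catch (IOException e) {
  // Assumed check: the call should be rejected by the safemode guard.
  GenericTestUtils.assertExceptionContains("safe mode", e);
}
{code}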


> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.






[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121940#comment-16121940
 ] 

Steve Loughran commented on HDFS-7240:
--

Putting my ASF process hat on: it is important that anyone interested in 
collaborating is allowed to join in, especially as real-time chats tend to be 
exclusive enough anyway.

IRC channel, perhaps?

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.






[jira] [Updated] (HDFS-11957) Enable POSIX ACL inheritance by default

2017-08-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11957:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~andrew.wang] for the review!

> Enable POSIX ACL inheritance by default
> ---
>
> Key: HDFS-11957
> URL: https://issues.apache.org/jira/browse/HDFS-11957
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11957.001.patch, HDFS-11957.002.patch
>
>
> It is time to enable POSIX ACL inheritance by default.
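For context, the configuration property involved, shown with the new default this jira turns on (a sketch for illustration):
{noformat}
<property>
  <name>dfs.namenode.posix.acl.inheritance.enabled</name>
  <value>true</value>
</property>
{noformat}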






[jira] [Updated] (HDFS-12255) Block Storage: Cblock should generate unique trace IDs for the ops

2017-08-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12255:
-
Status: Patch Available  (was: Open)

> Block Storage: Cblock should generate unique trace ID for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch
>
>
> Cblock tests fail because cblock does not generate a unique trace id for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of the following error.
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 times to the 
> container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
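One simple way to get a unique trace ID per op is a client-wide counter; a 
minimal sketch with assumed names (not the attached patch):
{code}
import java.util.concurrent.atomic.AtomicLong;

// A prefix (e.g. the volume name) plus a monotonically increasing counter
// yields an ID that is unique within this client process, e.g. "volume1:42".
final class TraceIds {
  private static final AtomicLong COUNTER = new AtomicLong();

  static String next(String prefix) {
    return prefix + ":" + COUNTER.incrementAndGet();
  }

  private TraceIds() {
  }
}
{code}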



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12255) Block Storage: Cblock should generate unique trace ID for the ops

2017-08-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12255:
-
Attachment: HDFS-12255-HDFS-7240.001.patch

> Block Storage: Cblock should generate unique trace ID for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch
>
>
> Cblock tests fail because cblock does not generate a unique trace id for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of the following error.
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 times to the 
> container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11900) Hedged reads thread pool creation not synchronized

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122217#comment-16122217
 ] 

Hadoop QA commented on HDFS-11900:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client: The patch 
generated 0 new + 49 unchanged - 1 fixed = 49 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11900 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870502/HDFS-11900.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a2140d60c18 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20642/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20642/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20642/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hedged reads thread pool creation not synchronized
> --
>
> Key: HDFS-11900
> URL: https://issues.apache.org/jira/browse/HDFS-11900
> 
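The title points at unsynchronized lazy creation of the hedged-read thread 
pool. A generic sketch of race-free lazy creation via double-checked locking 
(illustrative names, not the actual DFSClient code):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class HedgedReadPoolHolder {
  // volatile is required for safe publication under double-checked locking.
  private static volatile ExecutorService pool;

  static ExecutorService getOrCreate(int numThreads) {
    ExecutorService p = pool;
    if (p == null) {
      synchronized (HedgedReadPoolHolder.class) {
        p = pool;
        if (p == null) {
          p = Executors.newFixedThreadPool(numThreads);
          pool = p;
        }
      }
    }
    return p;
  }

  private HedgedReadPoolHolder() {
  }
}
{code}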

[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122231#comment-16122231
 ] 

Hadoop QA commented on HDFS-5040:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 277 unchanged - 8 fixed = 277 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-5040 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881285/HDFS-5040.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ff42984b63c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20638/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20638/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20638/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20638/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Audit log for admin commands/ logging output of 

[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122254#comment-16122254
 ] 

Hadoop QA commented on HDFS-11576:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
22s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 22s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 29s{color} | {color:orange} root: The patch generated 3 new + 784 unchanged 
- 0 fixed = 787 total (was 784) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 9 
unchanged - 0 fixed = 10 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 24s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11576 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879929/HDFS-11576.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8428a060d4f3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20640/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20640/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 

[jira] [Commented] (HDFS-12255) Block Storage: Cblock should generate unique trace ID for the ops

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122260#comment-16122260
 ] 

Hadoop QA commented on HDFS-12255:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 8 unchanged - 0 fixed = 12 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| Timed out junit tests | org.apache.hadoop.hdfs.TestFileChecksum |
|   | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | org.apache.hadoop.hdfs.TestDFSFinalize |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881297/HDFS-12255-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea3a9224a802 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0e32bf1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-12288:
--
Attachment: HDFS-12288.001.patch

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate; in reality it returns the total number of threads in the 
> thread group, as opposed to the number of threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> when choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.
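A sketch of that direction (the metrics accessor name is assumed; the attached 
patch may differ):
{code}
// In DataNode (sketch): report the live DataXceiver count from the metrics
// counter instead of the rough ThreadGroup estimate.
public int getXceiverCount() {
  // Was: return threadGroup == null ? 0 : threadGroup.activeCount();
  return metrics == null ? 0 : metrics.getDataNodeActiveXceiverCount();
}
{code}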



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-12288:
--
Status: Patch Available  (was: In Progress)

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate; in reality it returns the total number of threads in the 
> thread group, as opposed to the number of threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> when choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122281#comment-16122281
 ] 

Anu Engineer commented on HDFS-7240:


[~johnament] Would please care to comment on the ASF slack usage for people 
without Apache email ID? 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7240) Object store in HDFS

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122281#comment-16122281
 ] 

Anu Engineer edited comment on HDFS-7240 at 8/10/17 8:37 PM:
-

[~johnament] Would you please care to comment on the ASF slack usage for people 
without Apache email ID? 


was (Author: anu):
[~johnament] Would please care to comment on the ASF slack usage for people 
without Apache email ID? 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-10 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122298#comment-16122298
 ] 

Wellington Chevreuil commented on HDFS-12117:
-

Thanks [~jojochuang]! It seems my commit depends on the commit below, which 
has not been applied yet on branch-2:


||id||date||author||message||
|12c8fdceaf263425661169cba25402df89d444c1|2017-07-11 19:19| John Zhuge | 
HDFS-12052. Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS. 
Contributed by Zoran Dimitrijevic.|

Should I cherry-pick this one, together with mine, into branch-2?

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementations for the SNAPSHOT related methods of 
> the WebHDFS REST interface, as defined by the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the existing design 
> approach already used by other WebHDFS methods in the current HttpFS 
> project, so I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-08-10 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122306#comment-16122306
 ] 

Lukas Majercak commented on HDFS-11576:
---

Patch 008 was missing some changes; uploading a new one in a sec.

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization after succeeding with the first recovery 
> to the NN, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-08-10 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Attachment: HDFS-11576.009.patch

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization after succeeding with the first recovery 
> to the NN, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122343#comment-16122343
 ] 

Anu Engineer commented on HDFS-12238:
-

[~ajayydv] Thanks for the contribution. Some minor comments:

* There are a bunch of checkstyle warnings, mostly of the "more than 80 chars 
in a line" kind.
* I am not sure whether the changes in {{Ozonebucket.java}} and {{TestKeys.java}} 
are part of this change.
* Also, in the test {{TestOzoneContainer#testInvalidRequest}}, can we be more 
specific instead of just catching IllegalArgumentException?
 -- Something like catching the exception and verifying that it is indeed what 
you expect; maybe {{GenericTestUtils.assertExceptionContains}} or JUnit-specific 
checkers could be used, as in the sketch below.
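A minimal sketch of that assertion pattern (the request and client names are 
illustrative, not the actual test code):
{code}
try {
  client.sendCommandAsync(requestWithEmptyTraceId);
  Assert.fail("Expected IllegalArgumentException for an empty trace ID");
} catch (IllegalArgumentException e) {
  // Verify that it is the trace-ID validation that fired.
  GenericTestUtils.assertExceptionContains("Invalid trace ID", e);
}
{code}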


> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that ozone clients always send a valid trace ID. However, when you 
> do that, a set of current tests that do not add a valid trace ID will fail. So 
> we need to fix these tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12255) Block Storage: Cblock should generate unique trace ID for the ops

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122364#comment-16122364
 ] 

Anu Engineer commented on HDFS-12255:
-

[~msingh] +1 from me. I will commit after [~vagarychen]'s comments are 
addressed.

bq.  Also, log a warning when UnknownHostException ex happens?
If we decide to log this warning, can we make sure we warn only once, or at 
most a few times? Otherwise, for a client where this lookup keeps failing, the 
log file will be overrun with this warning. So while it might be a good idea to 
warn, we might want to restrict the number of times we do. We use a similar 
pattern on the datanode side: if we are not able to communicate with the SCM, 
we don't warn on every try; we only log, at a selected frequency, how many 
times the call has failed.

Another option is to log this at trace level so that it does not get into the 
log unless we are debugging.
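A minimal sketch of that frequency-limited warning (class and constant names 
assumed, not the actual CBlock code):
{code}
import java.net.UnknownHostException;
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Warn on the first failure, then only once every WARN_EVERY failures.
class LookupFailureLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(LookupFailureLogger.class);
  private static final long WARN_EVERY = 1000;
  private final AtomicLong failures = new AtomicLong();

  void warnOnLookupFailure(UnknownHostException e) {
    long count = failures.incrementAndGet();
    if (count == 1 || count % WARN_EVERY == 0) {
      LOG.warn("Hostname lookup has failed {} times so far", count, e);
    }
  }
}
{code}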


> Block Storage: Cblock should generate unique trace ID for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch, 
> HDFS-12255-HDFS-7240.002.patch
>
>
> Cblock tests fail because cblock does not generate a unique trace id for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of the following error.
> {code}
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 times to the 
> container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12287) Remove a no-longer applicable TODO comment in DatanodeManager

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122380#comment-16122380
 ] 

Hadoop QA commented on HDFS-12287:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881311/HDFS-12287.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e9cc19f6e92d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20644/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20644/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20644/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122397#comment-16122397
 ] 

Andrew Wang commented on HDFS-11882:


I spent some time digging into this, and I think I understand it better.

The last stripe can be a partial stripe. If the partial stripe happens to have 
enough data cells, it counts as an acked stripe (i.e., {{numDataBlock}} 
streamers at that length). Then it multiplies by the # bytes in a stripe, which 
can round up the numAckedBytes above the sentBytes.
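As a concrete illustration (numbers assumed for this example): with RS(6,3) and 
1 MB cells, a full stripe carries 6 MB of data. If the client has sent 9 MB, the 
last stripe holds 3 data cells; those 3 data cells plus the 3 parity cells give 
{{numDataBlock}} = 6 streamers at that length, so the acked length rounds up to 
2 full stripes, i.e. numAckedBytes = 12 MB > sentBytes = 9 MB.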

This partial stripe issue only applies to close. IIUC, we pad out the last data 
cell, and write all the parity cells. Empty cells are assumed to be zero, and 
count toward the minimum durability threshold of {{numDataBlock}} streamers. 
Besides close, we're always writing full stripes.

To be more concrete, imagine we are doing RS(6,3), and the last stripe looks 
like this:

{noformat}
x = full cell

|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |x |x  |  |  |  |x |x |x | 
{noformat}

For this partial stripe, 6 cells have data, which satisfies the 
{{numDataBlocks}} threshold.

{noformat}
|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |  |   |  |  |  |x |x |x | 
{noformat}

For this partial stripe, 4 cells have data, which fails the {{numDataBlocks}} 
threshold. 

Also, because there are supposed to be 5 empty cells, we only need one written 
cell to satisfy the durability requirement. As an example, for a data length of 
one cell, any of these would be fine:

{noformat}
|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |  |   |  |  |  |  |  |  | 
|  |  |   |  |  |  |  |x |  | 
{noformat}

Because this last stripe needs to be handled specially on close, I don't think 
the current proposed patch fully addresses the issue.

We also should try to address this related TODO:

{noformat}
  // TODO we can also succeed if all the failed streamers have not taken
  // the updated block
{noformat}

I'm working on a patch to rework this code, but it's pretty complex, and I 
wanted to post my thinking here first.

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 

[jira] [Comment Edited] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122397#comment-16122397
 ] 

Andrew Wang edited comment on HDFS-11882 at 8/10/17 9:48 PM:
-

I spent some time digging into this, and I think I understand it better.

The last stripe can be a partial stripe. If the partial stripe happens to have 
enough data cells, it counts as an acked stripe (i.e., {{numDataBlock}} 
streamers at that length). Then it multiplies by the # bytes in a stripe, which 
can round up the numAckedBytes above the sentBytes.

This partial stripe issue only applies to close. IIUC, we pad out the last data 
cell, and write all the parity cells. Empty cells are assumed to be zero, and 
count toward the minimum durability threshold of {{numDataBlock}} streamers. 
Besides close, we're always writing full stripes.

To be more concrete, imagine we are doing RS(6,3), and the last stripe looks 
like this:

{noformat}
x = full cell

|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |x |x  |  |  |  |x |x |x | 
{noformat}

For this partial stripe, 6 cells have data, which satisfies the 
{{numDataBlocks}} threshold.

{noformat}
|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |  |   |  |  |  |x |x |x | 
{noformat}

For this partial stripe, 4 cells have data, which fails the {{numDataBlocks}} 
threshold. 

Also, because there are supposed to be 5 empty cells, we only need one written 
cell to satisfy the durability requirement. As an example, for a data length of 
one cell, any of these would be fine:

{noformat}
|d1|d2|d3|d4|d5|d6|p1|p2|p3|
|x |  |  |  |  |  |  |  |  | 
|  |  |  |  |  |  |  |x |  | 
{noformat}

Because this last stripe needs to be handled specially on close, I don't think 
the current proposed patch fully addresses the issue.

We also should try to address this related TODO:

{noformat}
  // TODO we can also succeed if all the failed streamers have not taken
  // the updated block
{noformat}

I'm working on a patch to rework this code, but it's pretty complex, and I 
wanted to post my thinking here first.



> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> 

[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122411#comment-16122411
 ] 

Hadoop QA commented on HDFS-11303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
18s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881307/HDFS-11303.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9924b9e97719 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (HDFS-12281) Ozone: Ozone-default.xml has 3 properties that do not match the default Config value

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122418#comment-16122418
 ] 

Hadoop QA commented on HDFS-12281:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881305/HDFS-12281-HDFS-7240.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f124f0e46822 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven 

[jira] [Updated] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-08-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11303:
--
Attachment: HDFS-11303.004.patch

Patch 004
* Fix checkstyle
* Rename futureComplete back to future to avoid gratuitous changes
* Reword a few comments

> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch, HDFS-11303-001.patch, 
> HDFS-11303-002.patch, HDFS-11303-002.patch, HDFS-11303.003.patch, 
> HDFS-11303.004.patch
>
>
> Hedged read will read from one DN first; if that times out, it then reads 
> from other DNs simultaneously.
> If reads from all DNs fail, this bug causes the future list to be non-empty 
> (the first timed-out request is left in the list), so the loop hangs 
> infinitely.
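
To illustrate the hang pattern described above, here is a minimal sketch (not 
the actual DFSInputStream hedged-read code; the names are hypothetical):

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical sketch of the hedged-read wait loop. The loop only exits when
// the future list drains; if a failed or timed-out future is never removed
// (as in the reported bug), the list stays non-empty and the loop spins
// forever.
class HedgedReadSketch {
  ByteBuffer hedgedRead(List<Callable<ByteBuffer>> reads, ExecutorService pool)
      throws InterruptedException {
    CompletionService<ByteBuffer> cs = new ExecutorCompletionService<>(pool);
    List<Future<ByteBuffer>> futures = new ArrayList<>();
    for (Callable<ByteBuffer> read : reads) {
      futures.add(cs.submit(read));
    }
    while (!futures.isEmpty()) {
      Future<ByteBuffer> done = cs.poll(10, TimeUnit.MILLISECONDS);
      if (done == null) {
        continue;              // nothing finished yet; keep polling
      }
      futures.remove(done);    // the step the buggy path effectively skips
      try {
        return done.get();     // first successful read wins
      } catch (ExecutionException e) {
        // This read failed; fall through and wait for the remaining futures.
      }
    }
    throw new RuntimeException("reads from all datanodes failed");
  }
}
{code}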



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122451#comment-16122451
 ] 

Hadoop QA commented on HDFS-12288:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12288 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881316/HDFS-12288.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d81228349eea 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20645/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20645/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20645/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20645/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
>  

[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122449#comment-16122449
 ] 

Anu Engineer commented on HDFS-12196:
-

[~cheersyang], thanks for updating the patch. It looks very good; I am almost 
a +1 on this change. I have some minor questions/comments.

h6. One design question
Should we allow each background task to create its own thread pool, or should 
we just have one common thread pool?
In the latter case, we would need a single service called BackgroundService, 
and all other jobs like "ContainerRecyclingService" would become tasks, say 
"ContainerRecyclingTask".
In other words, we could queue "ContainerRecyclingTask" directly to a 
background service without having individual job execution pools: one single 
common pool for all background jobs (see the sketch below).
Just wondering if that is an interface we want to offer. I do see the 
downside: a task could monopolize the thread pool. If you are free, ping me in 
Slack, or we can have an offline chat about this.

I am also open to checking in this architecture and maybe refining it later.
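
Roughly what I have in mind, as a hypothetical sketch (the class and method 
names are made up, not from the patch; ThreadFactoryBuilder is the Guava 
class):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.google.common.util.concurrent.ThreadFactoryBuilder;

// Hypothetical sketch: one shared daemon pool; each job becomes a task that
// is queued into the common service instead of owning its own executor.
interface BackgroundTask {
  void runTask();
}

class SharedBackgroundService {
  private final ScheduledExecutorService pool;

  SharedBackgroundService(String name, int threads) {
    this.pool = Executors.newScheduledThreadPool(threads,
        new ThreadFactoryBuilder()
            .setDaemon(true)
            .setNameFormat(name + "#%d")
            .build());
  }

  // e.g. schedule(new ContainerRecyclingTask(), 5, TimeUnit.MINUTES)
  void schedule(BackgroundTask task, long interval, TimeUnit unit) {
    pool.scheduleWithFixedDelay(task::runTask, 0, interval, unit);
  }
}
{code}
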
h6. Some code comments

* {{BackgroundService.java}}
bq. threadFactory = r -> new Thread(threadGroup, r);
Generally we expect the background service threads to be daemon threads. Would 
you like to replace this with something like
{code}
threadFactory = new ThreadFactoryBuilder()
    .setDaemon(true)
    .setNameFormat(threadName + "#%d")
    .build();
{code}
where threadName is an argument to the ctor?

* {{BackgroundService.java}}
Question: why do we need the testing flag and the testing thread?

* {{BackgroundTaskQueue.java}}
This is not thread safe; is that by design? Just wondering if it is possible 
for two different threads to call into this concurrently. From a quick code 
reading, I am not able to see it; maybe we can just add a comment to the class.

*  {{ContainerRecyclingService}}
Should we create a new package called *background* for these tasks under 
{{org.apache.hadoop.ozone.container.common.statemachine}}?
I am presuming we will need many more tasks like this in the future.


P.S. Sorry for the delay in the code review; I was focused on the pluggable 
pipeline patch.

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch, HDFS-12196-HDFS-7240.003.patch, 
> HDFS-12196-HDFS-7240.004.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service scans stale blocks for each container and 
> deletes chunks and references periodically.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122453#comment-16122453
 ] 

Anu Engineer commented on HDFS-12159:
-

Started a precommit job by hand:
https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20648/pipeline


> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: createFlow.png, HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12159:

Attachment: (was: createFlow.png)

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122479#comment-16122479
 ] 

Anu Engineer commented on HDFS-12159:
-

The PNG that I attached was confusing Jenkins:
{code}
HDFS-12159 patch is being downloaded at Thu Aug 10 22:46:11 UTC 2017 from
  https://issues.apache.org/jira/secure/attachment/12881104/createFlow.png -> 
Downloaded
ERROR: Unsure how to process HDFS-12159.
{code}

Removed the PNG; will resubmit this patch.

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122480#comment-16122480
 ] 

Anu Engineer commented on HDFS-12159:
-

Just submitted a new job:

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20649/pipeline

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-10 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122486#comment-16122486
 ] 

Lei (Eddy) Xu commented on HDFS-12221:
--

Hi, [~ajayydv]

Thanks for working on this. It LGTM.

One nit:

* Is {{handler.getTransformer().setOutputProperty(OutputKeys.STANDALONE, 
"yes");}} necessary?

+1 pending.
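
For reference, here is what that property controls, as a standalone sketch 
using only the JDK transform APIs (not the patch's XmlEditsVisitor code): it 
toggles the standalone attribute of the XML declaration.

{code}
import javax.xml.transform.OutputKeys;
import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.sax.TransformerHandler;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.helpers.AttributesImpl;

public class StandaloneDemo {
  public static void main(String[] args) throws Exception {
    SAXTransformerFactory factory =
        (SAXTransformerFactory) SAXTransformerFactory.newInstance();
    TransformerHandler handler = factory.newTransformerHandler();
    // Without this line the declaration has no standalone attribute.
    handler.getTransformer().setOutputProperty(OutputKeys.STANDALONE, "yes");
    handler.setResult(new StreamResult(System.out));
    handler.startDocument();
    handler.startElement("", "EDITS", "EDITS", new AttributesImpl());
    handler.endElement("", "EDITS", "EDITS");
    handler.endDocument();
    // Prints something like:
    // <?xml version="1.0" encoding="UTF-8" standalone="yes"?><EDITS/>
  }
}
{code}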

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: editsStored, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch, HDFS-12221.02.patch, HDFS-12221.03.patch, 
> HDFS-12221.04.patch, HDFS-12221.05.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK, to 
> make JAR shading easier (HADOOP-14672).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122487#comment-16122487
 ] 

Lukas Majercak commented on HDFS-12288:
---

Findbugs/unit test failures are unrelated to the change.

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate, and in reality returns the total number of threads in 
> the thread group as opposed to the threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> for choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.
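
To illustrate the difference described above (a hypothetical, self-contained 
sketch, not the DataNode code):

{code}
import java.util.concurrent.atomic.AtomicInteger;

public class XceiverCountSketch {
  // ThreadGroup.activeCount() is documented as an estimate: it counts every
  // live thread in the group (and its subgroups), not just threads doing work.
  static int roughCount(ThreadGroup xceiverGroup) {
    return xceiverGroup.activeCount();
  }

  // The metrics-style alternative: a counter that brackets the actual lifetime
  // of each xceiver's work, in the spirit of dataNodeActiveXceiversCount.
  static final AtomicInteger activeXceivers = new AtomicInteger();

  static void runXceiver(Runnable work) {
    activeXceivers.incrementAndGet();
    try {
      work.run();
    } finally {
      activeXceivers.decrementAndGet();
    }
  }
}
{code}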



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122502#comment-16122502
 ] 

Hanisha Koneru commented on HDFS-12288:
---

Thanks for the fix, [~lukmajercak].
The patch LGTM.
Just one nit: in _TestNamenodeCapacityReport#testXceiverCountInternal_, could 
you please update the comment below?
{code}
// the load for writers is 2 because both the write xceiver & packet
// responder threads are counted in the load
expectedTotalLoad += fileRepl;
expectedInServiceLoad += fileRepl;
{code}

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate, and in reality returns the total number of threads in 
> the thread group as opposed to the threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> for choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122479#comment-16122479
 ] 

Anu Engineer edited comment on HDFS-12159 at 8/10/17 11:09 PM:
---

The PNG that I attached was confusing Jenkins:
{code}
HDFS-12159 patch is being downloaded at Thu Aug 10 22:46:11 UTC 2017 from
  https://issues.apache.org/jira/secure/attachment/12881104/createFlow.png -> 
Downloaded
ERROR: Unsure how to process HDFS-12159.
{code}

Removed the PNG; will resubmit this patch.

cc: [~aw] I had posted a PNG right after I posted the patch. This caused 
Jenkins to fail, since Jenkins seems to have downloaded the PNG as the patch. 
Is there a way to upload PNGs without impacting Jenkins?



> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122509#comment-16122509
 ] 

Allen Wittenauer commented on HDFS-12159:
-

bq. Is there a way to upload PNGs without impacting Jenkins?

Yes. Upload it first.

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122514#comment-16122514
 ] 

Anu Engineer commented on HDFS-12159:
-

[~aw] Thanks for the tip, will do so in the future.


> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11882:
---
Attachment: HDFS-11882.03.patch

Here's a new patch. I added a little more testing, and a lot more comments.

The EC writing logic is quite complicated. I think we need a test that writes 
random file lengths and also does randomized fault injection. Writing this 
kind of test would be a non-trivial amount of work, but it'd add a lot of 
confidence.
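
Roughly the shape I have in mind (purely a sketch; {{writeFileWithFailures}} 
stands in for a MiniDFSCluster-based harness that doesn't exist yet):

{code}
import java.util.Random;

public class RandomizedEcWriteTest {
  public static void main(String[] args) {
    long seed = System.nanoTime();
    Random rand = new Random(seed);
    System.out.println("seed=" + seed);  // log the seed so failures reproduce
    final int dataBlocks = 6, parityBlocks = 3, cellSize = 1024 * 1024;
    for (int i = 0; i < 100; i++) {
      // Random length, covering partial last cells and partial last stripes.
      long len = (long) rand.nextInt(dataBlocks * cellSize * 4)
          + rand.nextInt(cellSize);
      // Kill up to parityBlocks streamers at random points during the write.
      int failures = rand.nextInt(parityBlocks + 1);
      System.out.printf("iteration %d: len=%d failures=%d%n", i, len, failures);
      // writeFileWithFailures(len, failures, rand);  // hypothetical harness
    }
  }
}
{code}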

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12289) HDFS-12091 breaks the tests for provided block reads

2017-08-10 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12289:
-

 Summary: HDFS-12091 breaks the tests for provided block reads
 Key: HDFS-12289
 URL: https://issues.apache.org/jira/browse/HDFS-12289
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Virajith Jalaparti


In the tests within {{TestNameNodeProvidedImplementation}}, the files that are 
supposed to belong to a provided volume are not located under the storage 
directory assigned to the volume in {{MiniDFSCluster}}. With HDFS-12091, this 
is no longer correct, and thus it breaks the tests. This JIRA is to fix the 
tests in {{TestNameNodeProvidedImplementation}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122562#comment-16122562
 ] 

Lukas Majercak commented on HDFS-12288:
---

Sure [~hanishakoneru], do you mind if I just delete it?

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate, and in reality returns the total number of threads in 
> the thread group as opposed to the threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> for choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12288) Fix DataNode's xceiver count calculation

2017-08-10 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122564#comment-16122564
 ] 

Hanisha Koneru commented on HDFS-12288:
---

Nope. Deleting should also be fine.

> Fix DataNode's xceiver count calculation
> 
>
> Key: HDFS-12288
> URL: https://issues.apache.org/jira/browse/HDFS-12288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
> Attachments: HDFS-12288.001.patch
>
>
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate, and in reality returns the total number of threads in 
> the thread group as opposed to the threads actually running.
> In some DNs, we saw this return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue, as we use the xceiverCount to make decisions on the NN 
> for choosing a replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11738) Hedged pread takes more time when block moved from initial locations

2017-08-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11738:
--
Summary: Hedged pread takes more time when block moved from initial 
locations  (was: hedged pread takes more time when block moved from initial 
locations)

> Hedged pread takes more time when block moved from initial locations
> 
>
> Key: HDFS-11738
> URL: https://issues.apache.org/jira/browse/HDFS-11738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-11738-01.patch, HDFS-11738-02.patch
>
>
> Scenario: 
> Same as HDFS-11708.
> During hedged read: 
> 1. The first two locations fail to read the data in hedged mode.
> 2. chooseData refetches locations and adds a future to read from DN3.
> 3. After adding the future for DN3, the main thread goes on refetching 
> locations in a loop and gets stuck there until all 3 retries to fetch 
> locations are exhausted, which consumes ~20 seconds with exponential retry 
> backoff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Attachment: (was: HDFS-12054.003.patch)

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to call checkNameNodeSafeMode() to ensure the NN is not in safe mode.
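
As a self-contained illustration of the proposed check-then-act placement 
(this is not FSNamesystem itself; the class and field here are hypothetical):

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

class SafeModeGuardSketch {
  private final AtomicBoolean inSafeMode = new AtomicBoolean(true);

  // Mirrors the spirit of FSNamesystem#checkNameNodeSafeMode: reject the
  // operation up front while the NN is still in safe mode.
  private void checkNameNodeSafeMode(String errorMsg) throws IOException {
    if (inSafeMode.get()) {
      throw new IOException(errorMsg + " Name node is in safe mode.");
    }
  }

  void addErasureCodingPolicies(String... policies) throws IOException {
    checkNameNodeSafeMode("Cannot add erasure coding policies.");
    // ... proceed with the mutation only after the check passes ...
  }
}
{code}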



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122004#comment-16122004
 ] 

Anu Engineer commented on HDFS-7240:


[~steve_l], [~elek] I have asked the Slack community how this can be solved. I 
am hopeful there is a way to invite people without an Apache email ID. I will 
update this discussion when I hear back from the community in Slack. If we 
cannot add people without an Apache ID, I will move this to IRC as Steve 
suggested.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122011#comment-16122011
 ] 

Hadoop QA commented on HDFS-11082:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11082 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880598/HDFS-11082.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20637/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch, HDFS-11082.002.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12286) Ozone: Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122013#comment-16122013
 ] 

Anu Engineer commented on HDFS-12286:
-

[~elek] The patch looks good to me. I have a question before we commit this. 
Suppose I create an MXBean with additional properties. How will it look when I 
access it via http://namenode:port/jmx? Do the additional properties show up 
as normal key/value pairs in the JSON?
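
For what it's worth, with plain JMX the extra pairs just become part of the 
ObjectName, so I would expect the /jmx servlet to render them inside the 
bean's name and to match them with a query like 
/jmx?qry=Hadoop:component=ozone,* — a hypothetical sketch (not the patched 
MBeans utility; the names here are made up):

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ExtraJmxKeysDemo {
  public interface DemoMXBean { int getValue(); }

  public static class Demo implements DemoMXBean {
    @Override public int getValue() { return 42; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    // Extra key/value pairs beyond the usual service= and name=.
    ObjectName name = new ObjectName(
        "Hadoop:service=SCMServer,name=SCMInfo,component=ozone");
    mbs.registerMBean(new Demo(), name);
    System.out.println(mbs.getAttribute(name, "Value"));  // 42
  }
}
{code}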


> Ozone: Extend MBeans utility to add any key value pairs to the registered 
> MXBeans
> -
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately, it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be 
> used as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web page. Both the SCM and KSM servers 
> have common JMX properties, but to use a common HTML component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM server 
> with one query.
> This will be possible by adding an additional (common) key/value property to 
> the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-08-10 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-5040:
--
Attachment: HDFS-5040.007.patch

Fixing checkstyle nits. Test failures are not related. Same goes for findbugs.

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Raghu C Doppalapudi
>Assignee: Kuhu Shukla
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.001.patch, HDFS-5040.004.patch, 
> HDFS-5040.005.patch, HDFS-5040.006.patch, HDFS-5040.007.patch, 
> HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> Enable the audit log for all the admin commands, and also provide the 
> ability to log all the admin commands in a separate log file; at this point 
> all the logging is displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122086#comment-16122086
 ] 

Chen Liang commented on HDFS-12238:
---

Thanks [~msingh] [~anu] for the follow-up! Looks good to me.

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> This ensures that Ozone clients always send a valid trace ID. However, when 
> you do that, a set of current tests that do not add a valid trace ID will 
> fail, so we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122092#comment-16122092
 ] 

Mukul Kumar Singh commented on HDFS-12238:
--

[~anu] Yes, we can commit this patch; HDFS-12255 will fix the CBlock test 
failures.

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> This ensures that Ozone clients always send a valid trace ID. However, when 
> you do that, a set of current tests that do not add a valid trace ID will 
> fail, so we need to fix those tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121662#comment-16121662
 ] 

Hadoop QA commented on HDFS-12268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 41s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881197/HDFS-12268-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d8288037985d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0e32bf1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20634/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| unit | 

[jira] [Commented] (HDFS-12209) VolumeScanner scan cursor not save periodic

2017-08-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121661#comment-16121661
 ] 

Wei-Chiu Chuang commented on HDFS-12209:


Thanks for the new patch. I'll probably be able to review it next week.

> VolumeScanner scan cursor not save periodic
> ---
>
> Key: HDFS-12209
> URL: https://issues.apache.org/jira/browse/HDFS-12209
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
> Environment: cdh5.4.0
>Reporter: fatkun
> Attachments: HDFS-12209.002.patch, HDFS-12209.patch
>
>
> The bug was introduced by HDFS-7430; the times are not the same: one is 
> monotonicMs and the other is clock time. Both should use Time.now().
> VolumeScanner.java
> {code:java}
> long saveDelta = monotonicMs - curBlockIter.getLastSavedMs();
> if (saveDelta >= conf.cursorSaveMs) {
>   LOG.debug("{}: saving block iterator {} after {} ms.",
>   this, curBlockIter, saveDelta);
>   saveBlockIterator(curBlockIter);
> }
> {code}
> curBlockIter.getLastSavedMs() is initialized here:
> FsVolumeImpl.java
> {code:java}
> BlockIteratorState() {
>   lastSavedMs = iterStartMs = Time.now();
>   curFinalizedDir = null;
>   curFinalizedSubDir = null;
>   curEntry = null;
>   atEnd = false;
> }
> {code}
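To make the mismatch concrete: Time.monotonicNow() is backed by 
System.nanoTime(), whose origin is arbitrary, so subtracting an epoch-millis 
value (Time.now()) from it produces a meaningless delta. A small 
self-contained sketch, assuming only the two clock sources named above:

{code:java}
// Illustrative sketch only: why mixing the two clocks breaks the check.
public class ClockMismatchSketch {
  public static void main(String[] args) {
    long lastSavedMs = System.currentTimeMillis();     // like Time.now()
    long monotonicMs = System.nanoTime() / 1_000_000L; // like Time.monotonicNow()
    long saveDelta = monotonicMs - lastSavedMs;
    // saveDelta is typically a huge negative number, so a condition like
    // "saveDelta >= cursorSaveMs" never fires on the intended schedule.
    System.out.println("bogus saveDelta = " + saveDelta + " ms");
  }
}
{code}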



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12286:

Status: Patch Available  (was: Open)

> Extend MBeans utility to add any key value pairs to the registered MXBeans
> --
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be 
> used as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web page. Both the SCM and KSM servers 
> have common JMX properties, but to use a common HTML component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM 
> server with one query.
> This becomes possible by adding an additional (common) key/value property 
> to the ObjectName.
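As a concrete reference, plain JMX already supports this. A minimal, 
self-contained sketch using only the standard javax.management API (the bean 
and key names here are hypothetical, not the patch itself):

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ExtraJmxKeysSketch {
  public interface DemoInfoMBean {
    String getVersion();
  }

  public static class DemoInfo implements DemoInfoMBean {
    @Override
    public String getVersion() { return "1.0"; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    // Beyond "service" and "name", any additional key/value pairs are legal:
    ObjectName name = new ObjectName(
        "Hadoop:service=KeySpaceManager,name=DemoInfo,component=ServerRuntime");
    mbs.registerMBean(new DemoInfo(), name);
    // One pattern query can match every bean sharing the common key:
    System.out.println(
        mbs.queryNames(new ObjectName("Hadoop:component=ServerRuntime,*"), null));
  }
}
{code}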



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12286:

Attachment: HDFS-12286-HDFS-7240.001.patch

> Extend MBeans utility to add any key value pairs to the registered MXBeans
> --
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be 
> used as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web page. Both the SCM and KSM servers 
> have common JMX properties, but to use a common HTML component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM 
> server with one query.
> This becomes possible by adding an additional (common) key/value property 
> to the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11957) Enable POSIX ACL inheritance by default

2017-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122000#comment-16122000
 ] 

Hudson commented on HDFS-11957:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12161/])
HDFS-11957. Enable POSIX ACL inheritance by default. Contributed by John 
(jzhuge: rev 312e57b95477ec95e6735f5721c646ad1df019f8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Enable POSIX ACL inheritance by default
> ---
>
> Key: HDFS-11957
> URL: https://issues.apache.org/jira/browse/HDFS-11957
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11957.001.patch, HDFS-11957.002.patch
>
>
> It is time to enable POSIX ACL inheritance by default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122002#comment-16122002
 ] 

Mukul Kumar Singh commented on HDFS-12238:
--

Hi [~anu] and [~vagarychen],

The test failures in {{TestBufferManager}} and {{TestCBlockReadWrite}} are 
tracked in HDFS-12255.

I have just uploaded a patch to the jira; please have a look.

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that ozone clients always send a valid trace ID. However, when you 
> do that, a set of current tests that do not set a valid trace ID will fail, 
> so we need to fix these tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Ozone: Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12286:

Summary: Ozone: Extend MBeans utility to add any key value pairs to the 
registered MXBeans  (was: Extend MBeans utility to add any key value pairs to 
the registered MXBeans)

> Ozone: Extend MBeans utility to add any key value pairs to the registered 
> MXBeans
> -
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be 
> used as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web page. Both the SCM and KSM servers 
> have common JMX properties, but to use a common HTML component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM 
> server with one query.
> This becomes possible by adding an additional (common) key/value property 
> to the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122050#comment-16122050
 ] 

Anu Engineer commented on HDFS-12238:
-

[~msingh] Does that mean we can go ahead and commit this? [~vagarychen] Are 
you ok with me committing this patch, since Mukul seems to have fixed the 
other failures in HDFS-12255?

> Ozone: Add valid trace ID check in sendCommandAsync
> ---
>
> Key: HDFS-12238
> URL: https://issues.apache.org/jira/browse/HDFS-12238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HDFS-12238-HDFS-7240.01.patch
>
>
> In the function {{XceiverClientHandler#sendCommandAsync}} we should add a 
> check 
> {code}
> if (StringUtils.isEmpty(request.getTraceID())) {
>   throw new IllegalArgumentException("Invalid trace ID");
> }
> {code}
> to ensure that ozone clients always send a valid trace ID. However, when you 
> do that, a set of current tests that do not set a valid trace ID will fail, 
> so we need to fix these tests too.
> {code}
>   TestContainerMetrics.testContainerMetrics
>   TestOzoneContainer.testBothGetandPutSmallFile
>   TestOzoneContainer.testCloseContainer
>   TestOzoneContainer.testOzoneContainerViaDataNode
>   TestOzoneContainer.testXcieverClientAsync
>   TestOzoneContainer.testCreateOzoneContainer
>   TestOzoneContainer.testDeleteContainer
>   TestContainerServer.testClientServer
>   TestContainerServer.testClientServerWithContainerDispatcher
>   TestKeys.testPutAndGetKeyWithDnRestart
> {code}
> This is based on a comment from [~vagarychen] in HDFS-11580.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2017-08-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122049#comment-16122049
 ] 

Konstantin Shvachko commented on HDFS-9153:
---

Hey guys, I think the metric you introduced is absolutely deceiving and has 
nothing to do with the throughput the benchmark is intended to measure.
"Test exec time" is the running time of the job, which includes the compute 
overhead: scheduling, cleanup, and retries if there were failed maps. We 
want to benchmark the average throughput of the actual data transfers on 
HDFS; as you can see, the implementation measures the time of the transfers 
only.

The formatting changes are fine, but I think "Total Throughput" should be 
removed.
The bug reported in MAPREDUCE-6931 makes it invalid, but even if that were 
fixed it would still be deceiving.

Also, DFSIO issues should be filed on the HDFS jira; then you can expect a 
more prompt response.

> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9153-v1.patch
>
>
> Ref. the following DFSIO output: I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used for 
> another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec 
> time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}
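For concreteness, here is the arithmetic behind the two numbers being 
debated (a sketch; values taken from the log above, and it assumes one map 
per file, which is how the figures line up):

{code:java}
// Illustrative arithmetic only, using the values from the log above.
public class DfsioThroughputSketch {
  public static void main(String[] args) {
    double totalMB = 1228800.0;  // Total MBytes processed
    double execSec = 762.697;    // Test exec time sec (includes job overhead)
    int maps = 100;              // Number of files, one map per file (assumed)
    double perMapMBs = 17.457387239456878; // reported per-map throughput

    // The proposed 1610 MB/s divides total data by wall-clock job time:
    System.out.printf("total / exec time = %.0f MB/s%n", totalMB / execSec);
    // The reported throughput is a per-map average over transfer time only;
    // scaled by the map count it lands in the same ballpark:
    System.out.printf("per-map x maps    = %.0f MB/s%n", perMapMBs * maps);
  }
}
{code}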



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9153) Pretty-format the output for DFSIO

2017-08-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122049#comment-16122049
 ] 

Konstantin Shvachko edited comment on HDFS-9153 at 8/10/17 6:35 PM:


Hey guys, I think the metric you introduced is absolutely deceiving and has 
nothing to do with the throughput the benchmark is intended to measure.
"Test exec time" is the running time of the job, which includes the compute 
overhead: scheduling, cleanup, and retries if there were failed maps. We 
want to benchmark the average throughput of the actual data transfers on 
HDFS; as you can see, the implementation measures the time of the transfers 
only.

The formatting changes are fine, but I think "Total Throughput" should be 
removed.
The bug reported in MAPREDUCE-6931 makes it invalid, but even if that were 
fixed it would still be deceiving.

-Also, DFSIO issues should be filed on the HDFS jira; then you can expect a 
more prompt response.-
_Sorry, the last part was meant for the other jira. Please ignore._


was (Author: shv):
Hey guys, I think the metric you introduced is absolutely deceiving and has 
nothing to do with the throughput the benchmark is intended to measure.
"Test exec time" is the running time of the job, which includes the compute 
overhead: scheduling, cleanup, and retries if there were failed maps. We 
want to benchmark the average throughput of the actual data transfers on 
HDFS; as you can see, the implementation measures the time of the transfers 
only.

The formatting changes are fine, but I think "Total Throughput" should be 
removed.
The bug reported in MAPREDUCE-6931 makes it invalid, but even if that were 
fixed it would still be deceiving.

Also, DFSIO issues should be filed on the HDFS jira; then you can expect a 
more prompt response.

> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9153-v1.patch
>
>
> Ref. the following DFSIO output: I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used for 
> another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec 
> time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12273) Federation UI

2017-08-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122059#comment-16122059
 ] 

Ravi Prakash commented on HDFS-12273:
-

I didn't realize this was in the Federation branch. This is way out of my 
field. Sorry. Could some branch committers please review it?

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12255) Block Storage: CBlock should generate unique trace IDs for the ops

2017-08-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12255:
-
Attachment: HDFS-12255-HDFS-7240.002.patch

> Block Storage: CBlock should generate unique trace IDs for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch, 
> HDFS-12255-HDFS-7240.002.patch
>
>
> CBlock tests fail because CBlock does not generate a unique trace ID for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of following error.
> {code}
> 017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 tim
> es to the container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
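A minimal sketch of the general fix direction, generating per-op unique IDs 
from a shared prefix plus an atomic counter (hypothetical names, not the 
actual patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: unique trace IDs within one client process.
public class TraceIdGeneratorSketch {
  private final String prefix;
  private final AtomicLong counter = new AtomicLong();

  public TraceIdGeneratorSketch(String prefix) {
    this.prefix = prefix;
  }

  public String nextTraceId() {
    // Unique as long as the prefix identifies this client instance.
    return prefix + "-" + counter.incrementAndGet();
  }

  public static void main(String[] args) {
    TraceIdGeneratorSketch gen = new TraceIdGeneratorSketch("cblock-writer");
    System.out.println(gen.nextTraceId()); // cblock-writer-1
    System.out.println(gen.nextTraceId()); // cblock-writer-2
  }
}
{code}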



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC

2017-08-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122025#comment-16122025
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12159:


XcieverRatisServer looks good.  Thanks a lot!

> Ozone: SCM: Add create replication pipeline RPC
> ---
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: createFlow.png, HDFS-12159-HDFS-7240.001.patch, 
> HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch
>
>
> Add an API that allows users to create replication pipelines using SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12287) Remove a no-longer applicable TODO comment in DatanodeManager

2017-08-10 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12287:
-

 Summary: Remove a no-longer applicable TODO comment in 
DatanodeManager
 Key: HDFS-12287
 URL: https://issues.apache.org/jira/browse/HDFS-12287
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Chen Liang
Assignee: Chen Liang
Priority: Trivial


{{DatanodeManager}} has this TODO comment:
{code}
// TODO: Enables DFSNetworkTopology by default after more stress
// testings/validations.
{code}

This was resolved in HDFS-11998, but that change missed removing the comment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-08-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-11576:
---
Status: In Progress  (was: Patch Available)

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization on the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 
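A tiny simulation of that race, under the assumption stated in the summary 
(recovery time > heartbeat interval; all numbers hypothetical):

{code:java}
// Illustrative sketch only: every commit arrives with a stale recovery ID.
public class RecoveryRaceSketch {
  public static void main(String[] args) {
    long heartbeatMs = 3_000; // DN heartbeat interval (hypothetical)
    long recoveryMs = 5_000;  // time to recover one block (hypothetical)
    long nnRecoveryId = 0;    // latest ID the NN has handed out

    for (int round = 1; round <= 3; round++) {
      long issuedId = ++nnRecoveryId; // steps 2/5: NN issues X, then X+1, ...
      // While the DN spends recoveryMs recovering, at least one more
      // heartbeat fires and the NN hands out a newer ID:
      nnRecoveryId += recoveryMs / heartbeatMs; // >= 1 when recovery is slower
      boolean accepted = issuedId >= nnRecoveryId; // step 6 check
      System.out.printf("round %d: commit id=%d, NN expects %d -> %s%n",
          round, issuedId, nnRecoveryId, accepted ? "ok" : "rejected");
    }
    // Every round prints "rejected": the recovery can never be committed.
  }
}
{code}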



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-08-10 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Status: Patch Available  (was: In Progress)

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization on the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-08-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121389#comment-16121389
 ] 

Yiqun Lin commented on HDFS-12268:
--

The failing tests are related. I missed initializing the array 
{{pendingOpsLatency}} in class {{XceiverClientMetrics}}.
Attaching the updated patch, which also adds a unit test for 
{{XceiverClientMetrics}}; we can see the pending metrics working as expected.

> Ozone: Add metrics for pending storage container requests
> -
>
> Key: HDFS-12268
> URL: https://issues.apache.org/jira/browse/HDFS-12268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12268-HDFS-7240.001.patch, 
> HDFS-12268-HDFS-7240.002.patch
>
>
> As the storage container async interface has been supported since 
> HDFS-11580, we need to keep an eye on the queue depth of pending container 
> requests. It can help us spot performance problems sooner.
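The general shape of such a metric is an in-flight counter bumped when a 
request is queued and decremented when its future completes. A 
self-contained sketch with hypothetical names (not the actual 
XceiverClientMetrics code):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: a "pending ops" gauge for an async client.
public class PendingOpsSketch {
  private final AtomicInteger pendingOps = new AtomicInteger();

  public CompletableFuture<String> sendAsync(String request) {
    pendingOps.incrementAndGet();
    CompletableFuture<String> f =
        CompletableFuture.supplyAsync(() -> "reply to " + request);
    // whenComplete runs on success and on failure, so the gauge cannot leak.
    return f.whenComplete((r, t) -> pendingOps.decrementAndGet());
  }

  public int getPendingOps() {
    return pendingOps.get();
  }

  public static void main(String[] args) {
    PendingOpsSketch client = new PendingOpsSketch();
    CompletableFuture<String> f = client.sendAsync("putKey");
    System.out.println("pending = " + client.getPendingOps()); // usually 1
    f.join();
    System.out.println("pending = " + client.getPendingOps()); // 0
  }
}
{code}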



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121516#comment-16121516
 ] 

Hadoop QA commented on HDFS-12268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 35s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.cblock.TestCBlockReadWrite |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881183/HDFS-12268-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ff95cc302211 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0e32bf1 |
| 

[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-08-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12268:
-
Attachment: HDFS-12268-HDFS-7240.004.patch

The failing tests are unrelated now.
Attaching a new patch to fix the ASF license warning and the checkstyle 
warning.

> Ozone: Add metrics for pending storage container requests
> -
>
> Key: HDFS-12268
> URL: https://issues.apache.org/jira/browse/HDFS-12268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12268-HDFS-7240.001.patch, 
> HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, 
> HDFS-12268-HDFS-7240.004.patch
>
>
>  As storage container async interface has been supported after HDFS-11580, we 
> need to keep an eye on the queue depth of pending container requests. It can 
> help us better found if there are some performance problems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121929#comment-16121929
 ] 

Wei-Chiu Chuang edited comment on HDFS-12054 at 8/10/17 5:20 PM:
-

Thanks for the new patch. As a minor improvement,
would you please also add an additional statement after 
{code}
ns.addErasureCodingPolicies(policyArray);
{code}
 just to make sure it throws an exception in safe mode?
{code}
fail("AddECPolicyResponse should have failed.");
{code}

Also, since the scope of the patch is small and similar, this jira could be 
consolidated with HDFS-12066 to avoid repeated reviews and updates.


was (Author: jojochuang):
Thanks for the new patch. As a minor improvement,
would you please also add an additional statement after 
{code}
ns.addErasureCodingPolicies(policyArray);
{code}
 just to make sure it throws an exception in safe mode?
{code}
fail("AddECPolicyResponse should have failed.");
{code}
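Assembled, the suggested fragment would look roughly like this (a sketch 
only; it assumes the test has already entered safe mode and built 
{{policyArray}}, and that the call surfaces the safe-mode error as an 
IOException):

{code:java}
try {
  ns.addErasureCodingPolicies(policyArray);
  fail("AddECPolicyResponse should have failed.");
} catch (IOException e) {
  // expected: addErasureCodingPolicies must reject requests in safe mode
}
{code}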


> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12285) Better handling of namenode ip address change

2017-08-10 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122746#comment-16122746
 ] 

Ming Ma commented on HDFS-12285:


Thanks [~shahrs87]. Yeah, indeed related, although the exception and the 
scenario look different from the other jiras. Even if it is the same, let's 
keep this jira around for validation when we resolve the issue.

> Better handling of namenode ip address change
> -
>
> Key: HDFS-12285
> URL: https://issues.apache.org/jira/browse/HDFS-12285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>
> RPC client layer provides functionality to detect ip address change:
> {noformat}
> Client.java
> private synchronized boolean updateAddress() throws IOException {
>   // Do a fresh lookup with the old host name.
>   InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
>server.getHostName(), server.getPort());
> ..
> }
> {noformat}
> To use this feature, we need to enable retry via 
> {{dfs.client.retry.policy.enabled}}. Otherwise the {{TryOnceThenFail}} 
> RetryPolicy will be used, which causes {{handleConnectionFailure}} to throw 
> a {{ConnectException}} without retrying with the new ip address.
> {noformat}
> private void handleConnectionFailure(int curRetries, IOException ioe
> ) throws IOException {
>   closeConnection();
>   final RetryAction action;
>   try {
> action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
>   } catch(Exception e) {
> throw e instanceof IOException? (IOException)e: new IOException(e);
>   }
>   ..
>   }
> {noformat}
> However, using such a configuration isn't ideal. What happens is that 
> DFSClient still holds onto the cached old ip address created by {{namenode = 
> proxyInfo.getProxy();}}. Thus, when a new rpc connection is created, it 
> starts with the old ip and then retries with the new ip. It would be nice if 
> DFSClient could update the namenode proxy automatically upon an ip address 
> change.
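For reference, the retry behavior discussed above is switched on with the 
configuration key named in the description (a minimal sketch; the 
surrounding client setup is omitted):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only: enable retries so the RPC layer can pick up the
// refreshed address from updateAddress() instead of failing once.
public class RetryPolicyConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.client.retry.policy.enabled", true);
    System.out.println(
        conf.getBoolean("dfs.client.retry.policy.enabled", false)); // true
  }
}
{code}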



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12289) HDFS-12091 breaks the tests for provided block reads

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122745#comment-16122745
 ] 

Hadoop QA commented on HDFS-12289:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 10s{color} | {color:orange} root: The patch generated 3 new + 219 unchanged 
- 2 fixed = 222 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881367/HDFS-12289-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c429296baf9 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-9806 / 5c2a0a1 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20654/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-fs2img-warnings.html
 |
| checkstyle | 

[jira] [Updated] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12196:
---
Attachment: HDFS-12196-HDFS-7240.005.patch

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch, HDFS-12196-HDFS-7240.003.patch, 
> HDFS-12196-HDFS-7240.004.patch, HDFS-12196-HDFS-7240.005.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service periodically scans the stale blocks of each 
> container and deletes their chunks and references.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122780#comment-16122780
 ] 

Weiwei Yang commented on HDFS-12196:


Thanks [~anu] for the comments. Per offline discussion, since we will reuse 
the background service for multiple components (KSM, SCM, as well as the 
datanode), let's use separate thread pools. For the rest of your comments:

bq. Generally we expect the background service to be a daemon thread.

Fixed. Now it is constructed in the following way:

{code}
ThreadFactory tf = r -> new Thread(threadGroup, r);
threadFactory = new ThreadFactoryBuilder()
.setThreadFactory(tf)
.setDaemon(true)
.setNameFormat(serviceName + "#%d")
.build();
{code}

This ensures all threads are contained in a thread group so we can manage 
them together and, if necessary, get the number of running threads for this 
service.

bq. Question : Why do we need the testing flag and testing Thread?
I have refactored that into a test-only class 
{{ContainerRecyclingServicetTestImpl}}; instead of waiting for intervals, the 
test class runs each cycle via a function call, so that I can write 
finer-grained UT cases.

bq. BackgroundTaskQueue.java is not thread safe, is it by design?
This class doesn't have to be thread-safe, as there is only one thread 
fetching the tasks, but I used a thread-safe implementation in the new patch 
in case we need to access it from multiple threads in the future.

bq. Should we create a new package called background tasks ...
Fixed.

Thank you!
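On the thread-safety point, a minimal sketch of what a thread-safe task 
queue could look like, e.g. backed by {{PriorityBlockingQueue}} (hypothetical 
names, not the actual patch):

{code:java}
import java.util.concurrent.PriorityBlockingQueue;

// Illustrative sketch only: a thread-safe priority queue of background tasks.
public class BackgroundTaskQueueSketch {
  public interface BackgroundTask extends Comparable<BackgroundTask> {
    int getPriority();
    void run();

    @Override
    default int compareTo(BackgroundTask other) {
      return Integer.compare(getPriority(), other.getPriority());
    }
  }

  private final PriorityBlockingQueue<BackgroundTask> tasks =
      new PriorityBlockingQueue<>();

  public void add(BackgroundTask task) {
    tasks.offer(task); // safe from any producer thread
  }

  public BackgroundTask poll() {
    return tasks.poll(); // safe even with multiple consumer threads
  }

  public boolean isEmpty() {
    return tasks.isEmpty();
  }
}
{code}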

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch, HDFS-12196-HDFS-7240.003.patch, 
> HDFS-12196-HDFS-7240.004.patch, HDFS-12196-HDFS-7240.005.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service periodically scans the stale blocks of each 
> container and deletes their chunks and references.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122786#comment-16122786
 ] 

Anu Engineer commented on HDFS-12196:
-

[~cheersyang] +1 for v5 patch, pending Jenkins. 

I have a small nit, which *we don't need to address* in this patch. My 
apologies that I did not spot it earlier.
Instead of the word *recycling*, can we use the word *delete*? I feel that it 
is easier to understand.

Rename ContainerRecyclingService to something like BlockDeletingService. 

Also, change the comment for this class to something like:
{noformat}
A per-datanode block deleting service that deletes 
blocks from active containers.
{noformat}

You don't have to do this now; please feel free to commit. I know you have 3 
more check-ins pending on this, so feel free to modify this name in some 
later patch.


> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch, HDFS-12196-HDFS-7240.003.patch, 
> HDFS-12196-HDFS-7240.004.patch, HDFS-12196-HDFS-7240.005.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service periodically scans the stale blocks of each 
> container and deletes their chunks and references.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122790#comment-16122790
 ] 

Weiwei Yang commented on HDFS-12196:


Hi [~anu]

Thanks for your quick response. I will address these comments in this jira 
with the next patch; they are not big changes, so let's address them here. 
Thank you.

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch, HDFS-12196-HDFS-7240.003.patch, 
> HDFS-12196-HDFS-7240.004.patch, HDFS-12196-HDFS-7240.005.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service periodically scans the stale blocks of each 
> container and deletes their chunks and references.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12289) [READ] HDFS-12091 breaks the tests for provided block reads

2017-08-10 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12289:
--
Summary: [READ] HDFS-12091 breaks the tests for provided block reads  (was: 
HDFS-12091 breaks the tests for provided block reads)

> [READ] HDFS-12091 breaks the tests for provided block reads
> ---
>
> Key: HDFS-12289
> URL: https://issues.apache.org/jira/browse/HDFS-12289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12289-HDFS-9806.001.patch
>
>
> In the tests within {{TestNameNodeProvidedImplementation}}, the files that 
> are supposed to belong to a provided volume are not located under the storage 
> directory assigned to the volume in {{MiniDFSCluster}}. With HDFS-12091, this 
> is no longer correct and thus breaks the tests. This JIRA is to fix the tests 
> under {{TestNameNodeProvidedImplementation}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12255) Block Storage: CBlock should generate unique trace IDs for the ops

2017-08-10 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12255:
-
Attachment: HDFS-12255-HDFS-7240.003.patch

> Block Storage: CBlock should generate unique trace IDs for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12255-HDFS-7240.001.patch, 
> HDFS-12255-HDFS-7240.002.patch, HDFS-12255-HDFS-7240.003.patch
>
>
> CBlock tests fail because CBlock does not generate a unique trace ID for 
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of following error.
> {code}
> 017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 tim
> es to the container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


