[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085185#comment-16085185
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project: The patch generated 49 new 
+ 960 unchanged - 2 fixed = 1009 total (was 962) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 10 
unchanged - 0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Naked notify in 
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler.notifyNewSubmission()
  At ReencryptionHandler.java:At ReencryptionHandler.java:[line 813] |
|  |  
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call()
 uses the same code for two branches  At ReencryptionHandler.java:for two 
branches  At ReencryptionHandler.java:[line 604] |
|  |  
org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$EDEKReencryptCallable.call()
 uses the same code for two branches  At 

[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085154#comment-16085154
 ] 

Nandakumar commented on HDFS-12083:
---

Thanks [~linyiqun] for the review.

In the case of an invalid start key (startBucket here), {{LevelDBStore#getRangeKVs}} 
will throw an IOException, so {{rangeResult.remove(0)}} will never be called in that case.

From {{LevelDBStore#getRangeKVs}}:

{quote}
   * If the startKey is specified and found in levelDB, this key and the keys
   * after this key will be included in the result. If the startKey is null
   * all entries will be included as long as other conditions are satisfied.
   * If the given startKey doesn't exist, an IOException will be thrown.
{quote}
{code}
if (db.get(startKey) == null) {
  throw new IOException("Invalid start key, not found in current db.");
}
{code}
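
For illustration, here is a minimal pagination loop over a hypothetical pager (the method name and signature are assumed for the sketch, not taken from the KSM API) showing why excluding the previous key gives a clean exit criterion:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch only: "pager" stands in for a list call such as listBuckets; it is assumed to
// return at most maxKeys entries strictly AFTER previousKey (previous key excluded),
// and an empty page once nothing is left.
public class ListAllSketch {
  static List<String> listAll(BiFunction<String, Integer, List<String>> pager, int maxKeys) {
    List<String> all = new ArrayList<>();
    String previousKey = null;                    // null means "start from the beginning"
    while (true) {
      List<String> page = pager.apply(previousKey, maxKeys);
      if (page.isEmpty()) {
        return all;                               // an empty page is the natural exit criterion
      }
      all.addAll(page);
      previousKey = page.get(page.size() - 1);    // last key seen seeds the next call
    }
  }
}
{code}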

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch, 
> HDFS-12083-HDFS-7240.001.patch
>
>
> When the previous key is set as part of the list calls [listVolume, listBuckets & 
> listKeys], the result includes that previous key; there is no need to have it in 
> the result. 
> Since the previous key is present in the result, we will never receive an 
> empty list in the subsequent list calls, which makes it difficult to have an 
> exit criterion when we want to get all the values using multiple list calls 
> (with previous-key set).






[jira] [Commented] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085118#comment-16085118
 ] 

Yuanbo Liu commented on HDFS-11502:
---

[~xyao]
Thanks for taking care of this JIRA.

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-07-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.10.patch

Patch 10 ready for review.

- Addressed all comments and TODOs, including some additional offline reviews 
with [~jojochuang]. (Details listed below.)
- Implemented listReencryptionStatus.
- Completed the multi-threaded handler/updater, added a testing hook, and added related 
unit tests for race conditions.
- Added EDEK key version comparison, so only files with older EDEKs are re-encrypted.
- Reviewed and improved locking / EZ iteration.

The only TODOs left, I think, are:
- throttling
- better failure handling when the KMS is flaky (currently we just tell the admin 
there are failures and require another re-encryption).


Wei-Chiu's review comments, all reflected in the patch:
- cancelReencryption needs to cancel currently running tasks.
- stopThreads: make sure all futures are canceled.
- config description: find a better name than reencrypt.interval.
- ReencryptionHandler: reduce the scope of the test methods.
- ReencryptionUpdater: add more comments. (Also renamed it; the previous name, 
Finalizer, was confusing.)
- ReencryptionUpdater: add a class annotation.



> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, 
> Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-07-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Status: Patch Available  (was: Open)

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, 
> Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085081#comment-16085081
 ] 

Yiqun Lin commented on HDFS-12083:
--

Thanks [~nandakumar131] for working on this. One comment: there is a chance 
that this will throw an exception.
{code}
+List<Map.Entry<byte[], byte[]>> rangeResult;
+if (!Strings.isNullOrEmpty(startBucket)) {
+  //Since we are excluding start key from the result,
+  // the maxNumOfBuckets is incremented.
+  rangeResult = store.getRangeKVs(
+  getBucketKey(volumeName, startBucket),
+  maxNumOfBuckets + 1, filter);
+  //Remove start key from result.
+  rangeResult.remove(0);
+} else {
+  rangeResult = store.getRangeKVs(null, maxNumOfBuckets, filter);
+}
+
{code}
If we don't find the expected keys and then do the remove operation, an 
exception will be thrown.
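
If we also want to be defensive against that case (rather than relying on {{getRangeKVs}} throwing for an unknown start key), one possible variant, sketched here and not taken from the patch (the element type of rangeResult is assumed), would only drop the first entry when it really is the start key:

{code}
// Sketch only: reuse the same store/getBucketKey calls as the patch above;
// entries are assumed to be Map.Entry<byte[], byte[]>.
byte[] startKey = getBucketKey(volumeName, startBucket);
rangeResult = store.getRangeKVs(startKey, maxNumOfBuckets + 1, filter);
// Only remove the first entry if the result is non-empty and actually starts
// with the requested start key.
if (!rangeResult.isEmpty()
    && java.util.Arrays.equals(rangeResult.get(0).getKey(), startKey)) {
  rangeResult.remove(0);
}
{code}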

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch, 
> HDFS-12083-HDFS-7240.001.patch
>
>
> When the previous key is set as part of the list calls [listVolume, listBuckets & 
> listKeys], the result includes that previous key; there is no need to have it in 
> the result. 
> Since the previous key is present in the result, we will never receive an 
> empty list in the subsequent list calls, which makes it difficult to have an 
> exit criterion when we want to get all the values using multiple list calls 
> (with previous-key set).






[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover

2017-07-12 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085042#comment-16085042
 ] 

Rushabh S Shah commented on HDFS-11146:
---

[~brahmareddy]: it seems this patch doesn't apply anymore.
Can you please update the patch? In the meantime, I will try to review it.
Thanks!


> Excess replicas will not be deleted until all storages's FBR received after 
> failover
> 
>
> Key: HDFS-11146
> URL: https://issues.apache.org/jira/browse/HDFS-11146
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11146.patch
>
>
> Excess replicas will not be deleted until all storages' FBRs are received after 
> failover.
> I think the following solution can help.
>  *Solution:* 
> After failover, since the DNs are aware of the failover, they can send another full 
> block report (FBR) irrespective of the interval. Maybe some shuffling can be done, 
> similar to the initial delay.






[jira] [Commented] (HDFS-12128) Namenode failover may make balancer's efforts be in vain

2017-07-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085029#comment-16085029
 ] 

Brahma Reddy Battula commented on HDFS-12128:
-

HDFS-11146 might help with this? I considered this scenario as well.

> Namenode failover may make balancer's efforts be in vain
> 
>
> Key: HDFS-12128
> URL: https://issues.apache.org/jira/browse/HDFS-12128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.6.0
>Reporter: liuyiyang
>
> The problem can be reproduced as follows:
> 1. In an HA cluster with imbalanced datanode usage, we run "start-balancer.sh" 
> to make the cluster balanced;
> 2. Before starting the balancer, trigger a failover of the namenodes; this will make all 
> datanodes be marked as stale by the active namenode;
> 3. Start the balancer to make the datanode usage balanced;
> 4. While the balancer is running, under-utilized datanodes' usage will increase, but 
> over-utilized datanodes' usage will stay unchanged for a long time.
> Since all datanodes are marked as stale, deletion is postponed on stale 
> datanodes. During balancing, the replicas on source datanodes can't be 
> deleted immediately,
> so the total usage of the cluster will increase and won't decrease until 
> the datanodes' stale state is cleared.
> When the datanodes send their next block report to the namenode (default interval is 
> 6h), the active namenode will clear the stale state of the datanodes. I found that if 
> replicas on source datanodes can't be deleted immediately in the OP_REPLACE 
> operation via a del_hint to the namenode,
> the namenode will schedule replicas on the datanodes with the least remaining space for 
> deletion instead of the replicas on the source datanodes. Unfortunately, the datanodes with 
> the least remaining space may be the target datanodes of the balancing, which will 
> lead to imbalanced datanode usage again.
> If the balancer finishes before the next block report, all postponed over-replicated 
> replicas will be deleted based on the remaining space of the datanodes, and this may make 
> the balancer's efforts fruitless.






[jira] [Commented] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084964#comment-16084964
 ] 

Hudson commented on HDFS-11502:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11997 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11997/])
HDFS-11502. Datanode UI should display hostname based on JMX bean (xyao: rev 
e15e2713e1e344b14d63726639d1c83451921515)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/dn.js
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-12 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11264:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

I have just pushed it to branch!

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11264-HDFS-10285-01.patch, 
> HDFS-11264-HDFS-10285-02.patch, HDFS-11264-HDFS-10285-03.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS and Mover are not 
> running together; otherwise it may cause some issues.






[jira] [Commented] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-12 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084999#comment-16084999
 ] 

Uma Maheswara Rao G commented on HDFS-11264:


+1 on the latest patch!

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch, 
> HDFS-11264-HDFS-10285-02.patch, HDFS-11264-HDFS-10285-03.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS and Mover are not 
> running together; otherwise it may cause some issues.






[jira] [Commented] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084979#comment-16084979
 ] 

Hadoop QA commented on HDFS-11874:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-11874 does not apply to HDFS-10285. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876988/HDFS-11874-HDFS-10285-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20254/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch, 
> HDFS-11874-HDFS-10285-002.patch
>
>
> This JIRA is for tracking the documentation about the feature






[jira] [Updated] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-12 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11874:
---
Attachment: HDFS-11874-HDFS-10285-002.patch

[~rakeshr], thank you for the review! I have updated the patch to address the 
comments. #4 is not needed, as that was already covered.

> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch, 
> HDFS-11874-HDFS-10285-002.patch
>
>
> This JIRA is for tracking the documentation about the feature






[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
Target Version/s: 2.9.0, 3.0.0-beta1  (was: 2.9.0, 3.0.0-beta1, 2.8.1)

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
Target Version/s: 2.9.0, 3.0.0-beta1, 2.8.2  (was: 2.9.0, 3.0.0-beta1)

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
Fix Version/s: 2.8.2
   3.0.0-beta1
   2.9.0

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
Target Version/s: 2.8.2  (was: 2.9.0, 3.0.0-beta1, 2.8.2)

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Commented] (HDFS-12130) Optimizing permission check for getContentSummary

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084927#comment-16084927
 ] 

Hadoop QA commented on HDFS-12130:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 179 unchanged - 2 fixed = 187 total (was 181) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876963/HDFS-12130.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09d6f3a7e18e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 931a498 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20253/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20253/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20253/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20253/testReport/ |
| asflicense | 

[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~jeffreyr97] and [~yuanbo] for the contribution and everyone for the 
reviews. I've committed the latest patch to trunk, branch-2, branch-2.8, and 
branch-2.8.2.



> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Updated] (HDFS-11502) Datanode UI should display hostname based on JMX bean instead of window.location.hostname

2017-07-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11502:
--
Summary: Datanode UI should display hostname based on JMX bean instead of 
window.location.hostname  (was: dn.js set datanode UI to 
window.location.hostname, it should use jmx bean property to setup hostname)

> Datanode UI should display hostname based on JMX bean instead of 
> window.location.hostname
> -
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The Datanode UI calls "dn.js", which loads properties for the datanode. "dn.js" sets 
> "data.dn.HostName" in the datanode UI to "window.location.hostname"; it should use a 
> datanode property from the JMX beans or another appropriate property. The issue is 
> that if we use a proxy to access the datanode UI, we would show the proxy hostname 
> instead of the actual datanode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" bean's tag.Hostname 
> field to do that.






[jira] [Commented] (HDFS-11989) Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084876#comment-16084876
 ] 

Hadoop QA commented on HDFS-11989:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876940/HDFS-11989-HDFS-7240.20170712.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b567d69c4ec1 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20252/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20252/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Created] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-07-12 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-12131:
--

 Summary: Add some of the FSNamesystem JMX values as metrics
 Key: HDFS-12131
 URL: https://issues.apache.org/jira/browse/HDFS-12131
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, namenode
Reporter: Erik Krogen
Assignee: Erik Krogen
Priority: Minor


A number of useful values are emitted via the FSNamesystem JMX bean, but not 
through the metrics system. It would be useful to be able to track these over 
time, e.g. to alert on them via standard metrics systems or to view trends and rate 
changes:
* NumLiveDataNodes
* NumDeadDataNodes
* NumDecomLiveDataNodes
* NumDecomDeadDataNodes
* NumDecommissioningDataNodes
* NumStaleStorages

This is a simple change that just requires annotating the JMX methods with 
{{@Metric}}.
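
For example, a sketch of the kind of change involved (the getter shown and its description string are illustrative, not the exact FSNamesystem code):

{code}
// Sketch only: inside FSNamesystem, an existing JMX getter gains a @Metric annotation
// (org.apache.hadoop.metrics2.annotation.Metric) so the same value is also published
// through the metrics2 system.
@Metric({"NumLiveDataNodes", "Number of datanodes which are currently live"})
public int getNumLiveDataNodes() {
  return getBlockManager().getDatanodeManager().getNumLiveDataNodes();
}
{code}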






[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084857#comment-16084857
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
9s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
27s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
41s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 18m 15s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131 with JDK 
v1.8.0_131 generated 5 new + 5 unchanged - 0 fixed = 10 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 32s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_131 with JDK 
v1.7.0_131 generated 5 new + 5 unchanged - 0 fixed = 10 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
40s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876935/HDFS-12026.HDFS-8707.010.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  

[jira] [Created] (HDFS-12130) Optimizing permission check for getContentSummary

2017-07-12 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12130:
-

 Summary: Optimizing permission check for getContentSummary
 Key: HDFS-12130
 URL: https://issues.apache.org/jira/browse/HDFS-12130
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Chen Liang
Assignee: Chen Liang


Currently, {{getContentSummary}} takes two phases to complete:
- phase 1: check the permissions of the entire subtree. If any subdirectory does 
not have {{READ_EXECUTE}}, an access control exception is thrown and 
{{getContentSummary}} terminates there (unless it's the super user).
- phase 2: if phase 1 passed, traverse the entire tree recursively 
to compute the actual content summary.

An issue is that both phases currently hold the fs lock.

Phase 2 is already written so that it yields the fs lock over time, 
such that it does not block other operations for too long. However, phase 1 does 
not yield, meaning it's possible that the permission check phase still blocks 
things for a long time.

One fix is to add lock yielding to phase 1. But a simpler fix is to merge phase 1 
into phase 2. Namely, instead of doing a full traversal for the permission check 
first, we start with phase 2 directly, but for each directory, before obtaining 
its summary, we check its permission first. This way we take advantage of the existing 
lock yield in the phase 2 code and are still able to check permissions and terminate on 
an access exception.
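
A rough sketch of the merged approach (hypothetical types and method names defined for the sketch, using org.apache.hadoop.security.AccessControlException; this is not the actual FSDirectory/FSNamesystem code):

{code}
// Illustrative only: check READ_EXECUTE on each directory right before summarizing it,
// so the traversal's existing periodic lock yield bounds how long the fs lock is held.
interface Node { boolean isDirectory(); Dir asDir(); long length(); }
interface Dir extends Node { Iterable<Node> children(); }
interface Context {
  boolean isSuperUser();
  void checkReadExecute(Dir dir) throws AccessControlException;  // permission check
  void yieldLockIfNeeded();                                      // phase-2 style lock yield
}

static long summarize(Dir dir, Context ctx) throws AccessControlException {
  if (!ctx.isSuperUser()) {
    ctx.checkReadExecute(dir);     // throws and terminates, just like the old phase 1
  }
  long length = 0;
  for (Node child : dir.children()) {
    length += child.isDirectory() ? summarize(child.asDir(), ctx) : child.length();
  }
  ctx.yieldLockIfNeeded();
  return length;
}
{code}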

Thanks [~szetszwo] for the offline discussions!






[jira] [Updated] (HDFS-12130) Optimizing permission check for getContentSummary

2017-07-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12130:
--
Attachment: HDFS-12130.001.patch

Posted the initial patch.

> Optimizing permission check for getContentSummary
> -
>
> Key: HDFS-12130
> URL: https://issues.apache.org/jira/browse/HDFS-12130
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12130.001.patch
>
>
> Currently, {{getContentSummary}} takes two phases to complete:
> - phase 1: check the permissions of the entire subtree. If any subdirectory 
> does not have {{READ_EXECUTE}}, an access control exception is thrown and 
> {{getContentSummary}} terminates there (unless it's the super user).
> - phase 2: if phase 1 passed, traverse the entire tree recursively 
> to compute the actual content summary.
> An issue is that both phases currently hold the fs lock.
> Phase 2 is already written so that it yields the fs lock over time, 
> such that it does not block other operations for too long. However, phase 1 
> does not yield, meaning it's possible that the permission check phase still 
> blocks things for a long time.
> One fix is to add lock yielding to phase 1. But a simpler fix is to merge phase 
> 1 into phase 2. Namely, instead of doing a full traversal for the permission 
> check first, we start with phase 2 directly, but for each directory, before 
> obtaining its summary, we check its permission first. This way we take advantage 
> of the existing lock yield in the phase 2 code and are still able to check permissions and 
> terminate on an access exception.
> Thanks [~szetszwo] for the offline discussions!






[jira] [Updated] (HDFS-12130) Optimizing permission check for getContentSummary

2017-07-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12130:
--
Status: Patch Available  (was: Open)

> Optimizing permission check for getContentSummary
> -
>
> Key: HDFS-12130
> URL: https://issues.apache.org/jira/browse/HDFS-12130
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12130.001.patch
>
>
> Currently, {{getContentSummary}} takes two phases to complete:
> - phase 1: check the permissions of the entire subtree. If any subdirectory 
> does not have {{READ_EXECUTE}}, an access control exception is thrown and 
> {{getContentSummary}} terminates there (unless it's the super user).
> - phase 2: if phase 1 passed, traverse the entire tree recursively 
> to compute the actual content summary.
> An issue is that both phases currently hold the fs lock.
> Phase 2 is already written so that it yields the fs lock over time, 
> such that it does not block other operations for too long. However, phase 1 
> does not yield, meaning it's possible that the permission check phase still 
> blocks things for a long time.
> One fix is to add lock yielding to phase 1. But a simpler fix is to merge phase 
> 1 into phase 2. Namely, instead of doing a full traversal for the permission 
> check first, we start with phase 2 directly, but for each directory, before 
> obtaining its summary, we check its permission first. This way we take advantage 
> of the existing lock yield in the phase 2 code and are still able to check permissions and 
> terminate on an access exception.
> Thanks [~szetszwo] for the offline discussions!






[jira] [Updated] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6874:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Weiwei!

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation, which is already supported in WebHDFS, 
> is missing in HttpFS.  For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
>
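
For illustration only (this is not the committed patch), a minimal Java 
sketch of the kind of delegation the BAD_REQUEST stub could be replaced 
with, built on the public FileSystem#getFileBlockLocations API. The 
plain-text serialization below is a placeholder; the actual change wires the 
result through FSOperations, HttpFSServer and the related HttpFS classes.

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class GetFileBlockLocationsSketch {
  static String describeBlockLocations(FileSystem fs, Path path,
                                       long offset, long length) throws Exception {
    BlockLocation[] locations = fs.getFileBlockLocations(path, offset, length);
    StringBuilder out = new StringBuilder();
    for (BlockLocation loc : locations) {
      // Each BlockLocation carries the block range and the hosts holding replicas.
      out.append(loc.getOffset()).append(' ')
         .append(loc.getLength()).append(' ')
         .append(String.join(",", loc.getHosts())).append('\n');
    }
    return out.toString();
  }
}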



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084752#comment-16084752
 ] 

Hudson commented on HDFS-6874:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11995 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11995/])
HDFS-6874. Add GETFILEBLOCKLOCATIONS operation to HttpFS.  Contributed 
(szetszwo: rev 931a49800ef05ee0a6fdc143be1799abb228735d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java


> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation, which is already supported in WebHDFS, 
> is missing in HttpFS.  For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12123) Ozone: OzoneClient: Abstraction of OzoneClient and default implementation

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084755#comment-16084755
 ] 

Hadoop QA commented on HDFS-12123:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 14 new + 0 unchanged - 0 fixed = 14 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876667/HDFS-12123-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 892e5728abac 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20249/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20249/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20249/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20249/console |
| Powered by | Apache 

[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084751#comment-16084751
 ] 

Hadoop QA commented on HDFS-12083:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 
28s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876933/HDFS-12083-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 44e6b4f8ffce 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20250/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20250/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch, 
> HDFS-12083-HDFS-7240.001.patch
>
>
> When previous key is set as part of list calls 

[jira] [Updated] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6874:
--
Hadoop Flags: Reviewed
 Component/s: httpfs

+1 the 08 patch looks good.

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation, which is already supported in WebHDFS, 
> is missing in HttpFS.  For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11989) Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis

2017-07-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11989:
---
Attachment: HDFS-11989-HDFS-7240.20170712.patch

HDFS-11989-HDFS-7240.20170712.patch: fixes checkstyle warnings.

> Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis
> --
>
> Key: HDFS-11989
> URL: https://issues.apache.org/jira/browse/HDFS-11989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11989-HDFS-7240.20170618.patch, 
> HDFS-11989-HDFS-7240.20170620b.patch, HDFS-11989-HDFS-7240.20170620c.patch, 
> HDFS-11989-HDFS-7240.20170620.patch, HDFS-11989-HDFS-7240.20170621b.patch, 
> HDFS-11989-HDFS-7240.20170621c.patch, HDFS-11989-HDFS-7240.20170621.patch, 
> HDFS-11989-HDFS-7240.20170710.patch, HDFS-11989-HDFS-7240.20170712.patch
>
>
> Add Ratis tests similar to TestKeys, TestBuckets and TestVolume.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084618#comment-16084618
 ] 

Hadoop QA commented on HDFS-11973:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
0s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876926/HDFS-11973.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 7750dcc06bb6 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 513a361 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20248/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20248/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch, 
> HDFS-11973.HDFS-8707.001.patch
>

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084619#comment-16084619
 ] 

Hadoop QA commented on HDFS-12051:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 627 unchanged - 17 fixed = 634 total (was 644) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876918/HDFS-12051.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b9fe465b2600 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b628d0d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20246/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20246/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20246/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20246/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084556#comment-16084556
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
8s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m  9s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 11s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876925/HDFS-12026.HDFS-8707.009.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  

[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084603#comment-16084603
 ] 

Hadoop QA commented on HDFS-12026:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/20251/console in case of 
problems.


> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.010.patch

Fixed compatibility of the thread_local test with the old CMake that we have 
in our CI system.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084572#comment-16084572
 ] 

Nandakumar commented on HDFS-12083:
---

Test failures are related; uploaded patch v1 with test case fixes.

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch, 
> HDFS-12083-HDFS-7240.001.patch
>
>
> When the previous key is set as part of the list calls [listVolume, 
> listBuckets & listKeys], the result includes the previous key; there is no 
> need to have it in the result. 
> Since the previous key is present in the result, we will never receive an 
> empty list in subsequent list calls, which makes it difficult to have an 
> exit criterion when we want to get all the values using multiple list calls 
> (with previous-key set).
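
To make the exit-criterion problem concrete, here is a minimal, hypothetical 
pagination loop in Java; the Pager interface and list method are illustrative 
assumptions, not the real KSM client API. Once the previous key is excluded 
from the result, an empty page becomes a clean stopping condition.

import java.util.ArrayList;
import java.util.List;

class ListPaginationSketch {
  interface Pager {
    /** Returns up to maxKeys keys strictly after prevKey once the fix is in. */
    List<String> list(String prevKey, int maxKeys);
  }

  static List<String> listAll(Pager pager, int pageSize) {
    List<String> all = new ArrayList<>();
    String prevKey = null;
    while (true) {
      List<String> page = pager.list(prevKey, pageSize);
      if (page.isEmpty()) {
        break;  // clean exit criterion: no more keys after prevKey
      }
      all.addAll(page);
      prevKey = page.get(page.size() - 1);  // continue after the last key seen
    }
    return all;
  }
}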



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12123) Ozone: OzoneClient: Abstraction of OzoneClient and default implementation

2017-07-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12123:
--
Status: Patch Available  (was: In Progress)

> Ozone: OzoneClient: Abstraction of OzoneClient and default implementation
> -
>
> Key: HDFS-12123
> URL: https://issues.apache.org/jira/browse/HDFS-12123
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12123-HDFS-7240.000.patch
>
>
> The {{OzoneClient}} interface defines all the client operations supported by 
> Ozone. 
> {{OzoneClientImpl}} will provide the default implementation; it should 
> connect to KSM, SCM and DataNodes through the RPC protocol to execute client 
> calls.
> Similarly, we should have a client implementation which implements 
> {{OzoneClient}} and uses the REST protocol to execute client calls.
> This gives Ozone applications a lot of flexibility: when applications run 
> inside the cluster they can use the RPC protocol, but when running from 
> outside the cluster the same applications can speak the REST protocol to 
> communicate with Ozone.
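
A hedged sketch of the abstraction described above, using made-up names 
rather than the actual HDFS-12123 classes: a single client interface with an 
RPC-backed implementation for in-cluster callers and a REST-backed one for 
external callers.

interface OzoneClientSketch {
  void createVolume(String volumeName) throws Exception;
  void createBucket(String volumeName, String bucketName) throws Exception;
}

// RPC-backed implementation for callers inside the cluster
// (would talk to KSM, SCM and DataNodes over Hadoop RPC).
class RpcOzoneClientSketch implements OzoneClientSketch {
  public void createVolume(String volumeName) { /* RPC call to KSM */ }
  public void createBucket(String volumeName, String bucketName) { /* RPC call to KSM */ }
}

// REST-backed implementation for callers outside the cluster.
class RestOzoneClientSketch implements OzoneClientSketch {
  public void createVolume(String volumeName) { /* HTTP request to the REST gateway */ }
  public void createBucket(String volumeName, String bucketName) { /* HTTP request */ }
}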



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12083:
--
Attachment: HDFS-12083-HDFS-7240.001.patch

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch, 
> HDFS-12083-HDFS-7240.001.patch
>
>
> When the previous key is set as part of the list calls [listVolume, 
> listBuckets & listKeys], the result includes the previous key; there is no 
> need to have it in the result. 
> Since the previous key is present in the result, we will never receive an 
> empty list in subsequent list calls, which makes it difficult to have an 
> exit criterion when we want to get all the values using multiple list calls 
> (with previous-key set).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084505#comment-16084505
 ] 

Hadoop QA commented on HDFS-12083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.client.TestBuckets |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876894/HDFS-12083-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5263383b8939 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20244/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20244/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20244/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: 

[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Attachment: HDFS-11973.HDFS-8707.001.patch

Trying again.

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch, 
> HDFS-11973.HDFS-8707.001.patch
>
>
> In order to stay consistent with the tools and tests, I think we should 
> remove one level of directories in the examples folder. 
> E.g., this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our CMake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Comment: was deleted

(was: Fixed thread_local test.)

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.009.patch
>
>
> In order to stay consistent with the tools and tests, I think we should 
> remove one level of directories in the examples folder. 
> E.g., this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our CMake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084520#comment-16084520
 ] 

Hadoop QA commented on HDFS-12026:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/20247/console in case of 
problems.


> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Attachment: (was: HDFS-12026.HDFS-8707.009.patch)

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch
>
>
> In order to stay consistent with the tools and tests, I think we should 
> remove one level of directories in the examples folder. 
> E.g., this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our CMake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.009.patch

Fixed thread_local test.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Attachment: HDFS-12026.HDFS-8707.009.patch

Fixed thread_local test.

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.009.patch
>
>
> In order to stay consistent with the tools and tests, I think we should 
> remove one level of directories in the examples folder. 
> E.g., this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our CMake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084465#comment-16084465
 ] 

Anatoli Shein commented on HDFS-12026:
--

Some clarification on the libhdfspp build process with the latest patch:

During a normal build, libhdfspp is compiled with the default C/C++ compiler 
on the system (specified in the CC/CXX variables).

During the CI run, an additional second build with the alternative compiler 
is performed. For example, if CC/CXX are set to gcc/g++, then the second 
build will be done with clang/clang++, and vice versa. This assumes that both 
compilers are installed on the CI system and can simply be invoked from the 
terminal by entering their names. After adding clang to the Dockerfile, our 
current CI system supports that. We currently do not support CI systems that 
have only one compiler installed, or systems where gcc links to clang or 
clang links to gcc. E.g., if gcc links to clang on some CI system, then the 
second build would have to be done with the actual gcc, and it could be 
cumbersome to automatically find the location of the actual gcc compiler on 
such a system.

Please review.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084467#comment-16084467
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
8s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m  9s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 11s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876903/HDFS-12026.HDFS-8707.008.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  

[jira] [Commented] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084423#comment-16084423
 ] 

Anu Engineer commented on HDFS-12129:
-

+1, thanks for fixing this.

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12129-HDFS-7240.001.patch
>
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-07-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: In Progress  (was: Patch Available)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> 
> {code}
> To 

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-07-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Attachment: HDFS-12051.02.patch

I've redesigned the new NameCache so that its size adjusts depending on the 
size of the input data, within user-specified limits.
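
For illustration only, here is a minimal sketch of a size-adjusting intern cache 
for byte[] names. The class and parameter names (AdaptiveNameCache, minSize, 
maxSize) are assumptions made for this example and are not taken from the patch; 
the actual NameCache implementation may differ.

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch only: a tiny intern cache for byte[] names whose
 * capacity grows with the amount of input, bounded by configured limits.
 * Class and field names are hypothetical, not those used in the patch.
 */
public class AdaptiveNameCache {
  /** Wrapper so byte[] contents (not identity) drive equals/hashCode. */
  private static final class Key {
    final byte[] bytes;
    Key(byte[] b) { this.bytes = b; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }

  private final int maxSize;       // user-specified upper bound
  private int currentLimit;        // grows as more distinct names are seen
  private final Map<Key, byte[]> cache = new HashMap<>();

  public AdaptiveNameCache(int minSize, int maxSize) {
    this.maxSize = maxSize;
    this.currentLimit = minSize;
  }

  /** Returns the canonical copy of {@code name}, interning it if there is room. */
  public byte[] intern(byte[] name) {
    Key k = new Key(name);
    byte[] canonical = cache.get(k);
    if (canonical != null) {
      return canonical;            // duplicate array eliminated
    }
    // Grow the limit toward maxSize as the input grows.
    if (cache.size() >= currentLimit && currentLimit < maxSize) {
      currentLimit = (int) Math.min(2L * currentLimit, maxSize);
    }
    if (cache.size() < currentLimit) {
      cache.put(k, name);
    }
    return name;                   // cache is full: hand back the original
  }
}
{code}

The two points the sketch illustrates are that a byte[] needs a content-based 
wrapper to serve as a map key, and that interning stops once the configured 
ceiling is reached.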

It was tested using a synthetic workload simulating that of a big Hadoop 
installation. The result is an 8.5% reduction in the overhead due to duplicate 
byte[] arrays. Here are the results of the jxray analysis of the respective 
heap dumps:

Before

{code}
19. DUPLICATE PRIMITIVE ARRAYS

Types of duplicate objects:
 Ovhd Num objs  Num unique objs   Class name

346,198K (12.6%)   12097893  3714559 byte[]
...
Total arrays: 12,101,111  Unique arrays: 3,716,791  Duplicate values: 371,424  
Overhead: 346,322K (12.6%)

===

20. REFERENCE CHAINS FOR DUPLICATE PRIMITIVE ARRAYS

  333,160K (12.1%), 8458874 (99%) dup arrays (368811 unique)
78925 of byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 75981 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 51638 of 
byte[12](99, 99, 108, 95, 117, 95, 102, 108, 97, 103, ...), 50010 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 34126 of 
byte[15](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 24951 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 24394 of 
byte[12](99, 99, 108, 95, 117, 95, 102, 108, 97, 103, ...), 16851 of 
byte[15](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 14746 of 
byte[26](118, 101, 114, 115, 105, 111, 110, 61, 50, 48, ...), 10900 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...)
... and 8076342 more arrays, of which 368801 are unique
 <-- org.apache.hadoop.hdfs.server.namenode.INodeDirectory.name <-- 
org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
org.apache.hadoop.util.LightWeightGSet.entries <-- 
org.apache.hadoop.hdfs.server.namenode.INodeMap.map <-- 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.inodeMap <-- 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.dir <-- 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.this$0
 <-- Java Local@695acbb40 
(org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber)


{code}

After:

{code}
19. DUPLICATE PRIMITIVE ARRAYS

Types of duplicate objects:
 Ovhd Num objs  Num unique objs   Class name

100,440K (3.9%)   6208877  3855398 byte[]
...

Total arrays: 6,212,104  Unique arrays: 3,857,624  Duplicate values: 727,662  
Overhead: 100,566K (3.9%)

===

20. REFERENCE CHAINS FOR DUPLICATE PRIMITIVE ARRAYS

  56,568K (2.2%), 1575637 (96%) dup arrays (232009 unique)
52709 of byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 50009 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 16979 of 
byte[15](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 10899 of 
byte[14](112, 114, 111, 99, 95, 117, 110, 105, 116, 95, ...), 4853 of 
byte[14](114, 112, 116, 95, 112, 114, 100, 61, 50, 48, ...), 4494 of 
byte[14](114, 112, 116, 95, 112, 114, 100, 61, 50, 48, ...), 4396 of 
byte[20](112, 97, 114, 116, 105, 116, 105, 111, 110, 115, ...), 3919 of 
byte[14](114, 112, 116, 95, 112, 114, 100, 61, 50, 48, ...), 3460 of 
byte[14](114, 112, 116, 95, 112, 114, 100, 61, 50, 48, ...), 3452 of 
byte[14](114, 112, 116, 95, 112, 114, 100, 61, 50, 48, ...)
... and 1420457 more arrays, of which 231999 are unique
 <-- org.apache.hadoop.hdfs.server.namenode.INodeDirectory.name <-- 
org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
org.apache.hadoop.util.LightWeightGSet.entries <-- 
org.apache.hadoop.hdfs.server.namenode.INodeMap.map <-- 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.inodeMap <-- 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.dir <-- 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.namesystem <-- Java 
Local@68a849e38 (org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer)
  28,192K (1.1%), 993579 (41%) dup arrays (494398 unique)
3308 of byte[15](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, ...), 3308 of 
byte[15](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, ...), 3308 of byte[15](48, 48, 
48, 48, 48, 48, 95, 48, 95, 99, ...), 3308 of byte[15](48, 48, 48, 48, 48, 48, 
95, 48, 95, 99, ...), 3308 of byte[16](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, 
...), 3308 of byte[15](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, ...), 3308 of 
byte[15](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, ...), 3308 of byte[15](48, 48, 
48, 48, 48, 48, 95, 48, 95, 99, ...), 3307 of byte[16](48, 48, 48, 48, 48, 48, 
95, 48, 95, 99, ...), 3286 of byte[16](48, 48, 48, 48, 48, 48, 95, 48, 95, 99, 
...)
... and 960512 more arrays, of which 494388 are unique
 <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-07-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: Patch Available  (was: In Progress)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> 

[jira] [Commented] (HDFS-12118) Ozone: Ozone shell: Add more testing for volume shell commands

2017-07-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084435#comment-16084435
 ] 

Chen Liang commented on HDFS-12118:
---

Thanks [~linyiqun] for updating the patch and filing the JIRAs!

Some comments about the v002 patch though:

1. It seems the volume list check is added in only one place in 
{{testListVolumes}}; can we add it to all the non-error cases?

2. Using {{.stream()}} makes sense, but I don't think {{.filter(item -> 
item.startsWith("test-vol1")).collect(Collectors.toList());}} checks anything 
here. All it does is keep the elements with the prefix "test-vol1" and return 
them as a list. Even if there are, say, zero such elements, it still just 
returns the (empty) list; no error or exception ever occurs, so this line does 
not verify anything. Consider asserting on the length of the returned list (it 
should equal the size of the original list, i.e. all elements pass the filter), 
or using {{.allMatch()}} or {{.noneMatch()}} instead. Better still, check the 
content of the list directly: here it has to be exactly volume10, 12, 14, 16 
and 18, otherwise the result is wrong, so we should simply assert that.
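
To make the two suggested checks concrete, here is a minimal JUnit-style sketch. 
The variable {{volumeNames}} and the expected names are assumptions made for the 
example, not the patch's actual test code.

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

public class ListVolumesAssertionSketch {
  // Illustrative only: "volumeNames" and the expected values are assumptions.
  static void checkListing(List<String> volumeNames) {
    // Option 1: allMatch() returns false if any element lacks the prefix, so
    // the surrounding assertTrue fails the test on a mismatch
    // (filter().collect() alone never would).
    assertTrue(volumeNames.stream().allMatch(n -> n.startsWith("test-vol1")));

    // Option 2 (stronger): assert the exact content of the returned list.
    assertEquals(
        Arrays.asList("test-vol10", "test-vol12", "test-vol14",
                      "test-vol16", "test-vol18"),
        volumeNames);
  }
}
{code}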

> Ozone: Ozone shell: Add more testing for volume shell commands
> --
>
> Key: HDFS-12118
> URL: https://issues.apache.org/jira/browse/HDFS-12118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12118-HDFS-7240.001.patch, 
> HDFS-12118-HDFS-7240.002.patch
>
>
> Currently we do not have enough tests to cover all the ozone commands, so we 
> have to test them manually to see whether they run as expected. There are many 
> test cases to add for the volume, bucket and key commands. To keep the review 
> manageable, I'd like to split this work into three subtasks; this JIRA is a 
> good start, covering the volume commands.
> Plan to add unit tests for the following commands and their available options:
> * infoVolume
> * updateVolume
> * listVolume
> * createVolume



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084408#comment-16084408
 ] 

Hadoop QA commented on HDFS-12026:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/20245/console in case of 
problems.


> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Description: 
Currently multiple errors and warnings prevent libhdfspp from being compiled 
with clang. It should compile cleanly using flag:
-std=c++11

and also warning flags:
-Weverything -Wno-c++98-compat -Wno-missing-prototypes 
-Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
-Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror

  was:
Currently multiple errors and warnings prevent libhdfspp from being compiled 
with clang. It should compile cleanly using flags:
-std=c++11 -stdlib=libc++

and also warning flags:
-Weverything -Wno-c++98-compat -Wno-missing-prototypes 
-Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
-Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror


> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084399#comment-16084399
 ] 

Hadoop QA commented on HDFS-11264:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 0s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10285 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11264 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876870/HDFS-11264-HDFS-10285-03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af43c98f2219 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / f620377 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20242/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20242/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20242/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20242/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Double checks to ensure 

[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-12 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.008.patch

Added a test for thread_local.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084379#comment-16084379
 ] 

Hadoop QA commented on HDFS-12129:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876882/HDFS-12129-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 646c0a949868 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20243/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20243/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20243/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: SCM http server is not stopped with SCM#stop()
> -
>

[jira] [Updated] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12083:
--
Status: Patch Available  (was: Open)

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch
>
>
> When a previous key is set as part of the list calls [listVolume, listBuckets 
> & listKeys], the result includes the previous key itself; there is no need to 
> have it in the result. 
> Because the previous key is included in the result, we will never receive an 
> empty list in subsequent list calls, which makes it difficult to define an 
> exit criterion when we want to fetch all values using multiple list calls 
> (with previous-key set).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5042) Completed files lost after power failure

2017-07-12 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084300#comment-16084300
 ] 

Vinayakumar B commented on HDFS-5042:
-

bq. Wondering if we should make this feature configurable
This already uses the existing *dfs.datanode.synconclose* configuration; I felt 
that was sufficient.

> Completed files lost after power failure
> 
>
> Key: HDFS-5042
> URL: https://issues.apache.org/jira/browse/HDFS-5042
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>Reporter: Dave Latham
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, 
> HDFS-5042-03.patch, HDFS-5042-04.patch, HDFS-5042-05-branch-2.patch, 
> HDFS-5042-05.patch, HDFS-5042-branch-2-01.patch, HDFS-5042-branch-2-05.patch, 
> HDFS-5042-branch-2.7-05.patch, HDFS-5042-branch-2.7-06.patch, 
> HDFS-5042-branch-2.8-05.patch, HDFS-5042-branch-2.8-06.patch, 
> HDFS-5042-branch-2.8-addendum.patch
>
>
> We suffered a cluster wide power failure after which HDFS lost data that it 
> had acknowledged as closed and complete.
> The client was HBase which compacted a set of HFiles into a new HFile, then 
> after closing the file successfully, deleted the previous versions of the 
> file.  The cluster then lost power, and when brought back up the newly 
> created file was marked CORRUPT.
> Based on reading the logs it looks like the replicas were created by the 
> DataNodes in the 'blocksBeingWritten' directory.  Then when the file was 
> closed they were moved to the 'current' directory.  After the power cycle 
> those replicas were again in the blocksBeingWritten directory of the 
> underlying file system (ext3).  When those DataNodes reported in to the 
> NameNode it deleted those replicas and lost the file.
> Some possible fixes could be having the DataNode fsync the directory(s) after 
> moving the block from blocksBeingWritten to current to ensure the rename is 
> durable or having the NameNode accept replicas from blocksBeingWritten under 
> certain circumstances.
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: 
> Creating 
> file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
>  blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
> blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: 
> /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
> blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
> blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
> lease on  file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming compacted file at 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  to 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
> RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Completed major compaction of 7 file(s) in n of 
> 

[jira] [Updated] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12083:
--
Attachment: HDFS-12083-HDFS-7240.000.patch

> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch
>
>
> When a previous key is set as part of the list calls [listVolume, listBuckets 
> & listKeys], the result includes the previous key itself; there is no need to 
> have it in the result. 
> Because the previous key is included in the result, we will never receive an 
> empty list in subsequent list calls, which makes it difficult to define an 
> exit criterion when we want to fetch all values using multiple list calls 
> (with previous-key set).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12083) Ozone: KSM: previous key has to be excluded from result in listVolumes, listBuckets and listKeys

2017-07-12 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084311#comment-16084311
 ] 

Nandakumar commented on HDFS-12083:
---

Hi [~cheersyang],
bq. From pagination point of view, it is easier to use prev-key in the listing 
result
True, it will make the pagination implementation easier.

I'm uploading an initial version of the patch; please review.
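
To illustrate the exit criterion this enables, here is a minimal sketch of a 
paginated listing loop. The {{VolumePager}} interface and its 
{{listVolumes(prevKey, maxKeys)}} method are hypothetical stand-ins, assumed 
only to return keys strictly after the previous key.

{code}
import java.util.ArrayList;
import java.util.List;

public class ListAllVolumesSketch {
  /** Hypothetical pager: listVolumes(prevKey, maxKeys) is assumed to return
   *  at most maxKeys names strictly after prevKey (prevKey itself excluded). */
  interface VolumePager {
    List<String> listVolumes(String prevKey, int maxKeys);
  }

  static List<String> listAll(VolumePager pager, int pageSize) {
    List<String> all = new ArrayList<>();
    String prevKey = null;                 // null means "start from the beginning"
    while (true) {
      List<String> page = pager.listVolumes(prevKey, pageSize);
      if (page.isEmpty()) {
        break;                             // clean exit: nothing new was returned
      }
      all.addAll(page);
      prevKey = page.get(page.size() - 1); // continue after the last key seen
    }
    return all;
  }
}
{code}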



> Ozone: KSM: previous key has to be excluded from result in listVolumes, 
> listBuckets and listKeys
> 
>
> Key: HDFS-12083
> URL: https://issues.apache.org/jira/browse/HDFS-12083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Critical
> Attachments: HDFS-12083-HDFS-7240.000.patch
>
>
> When a previous key is set as part of the list calls [listVolume, listBuckets 
> & listKeys], the result includes the previous key itself; there is no need to 
> have it in the result. 
> Because the previous key is included in the result, we will never receive an 
> empty list in subsequent list calls, which makes it difficult to define an 
> exit criterion when we want to fetch all values using multiple list calls 
> (with previous-key set).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Status: Patch Available  (was: Open)

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12129-HDFS-7240.001.patch
>
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Affects Version/s: HDFS-7240

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Target Version/s: HDFS-7240

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12129-HDFS-7240.001.patch
>
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Attachment: HDFS-12129-HDFS-7240.001.patch

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12129-HDFS-7240.001.patch
>
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Description: Found this issue while trying to restart SCM; it failed with an 
"address already in use" error. This is because the http server is not stopped 
in the stop() method.

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12129:
--

Assignee: Weiwei Yang

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Component/s: scm
 ozone

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>
> Found this issue while trying to restart SCM; it failed with an "address 
> already in use" error. This is because the http server is not stopped in the 
> stop() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12129) Ozone: SCM http server is not stopped with SCM#stop()

2017-07-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12129:
---
Summary: Ozone: SCM http server is not stopped with SCM#stop()  (was: Ozone)

> Ozone: SCM http server is not stopped with SCM#stop()
> -
>
> Key: HDFS-12129
> URL: https://issues.apache.org/jira/browse/HDFS-12129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12129) Ozone

2017-07-12 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12129:
--

 Summary: Ozone
 Key: HDFS-12129
 URL: https://issues.apache.org/jira/browse/HDFS-12129
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Weiwei Yang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084152#comment-16084152
 ] 

Rakesh R edited comment on HDFS-11264 at 7/12/17 3:35 PM:
--

Attached a new patch. Here, I have changed the order of the Mover status check 
in SPS. With this change, we avoid the unnecessary instantiation and starting of 
the SPS thread if Mover is running. It also avoids shutdown calls in the 
exception block if the thread is interrupted by the user or admin. 
[~umamaheswararao], please review. Thanks!

+Note:+ I rebased the branch on the latest trunk code yesterday. In this patch, 
I have included a few EC-related modifications so that the test case runs 
against the trunk code.


was (Author: rakeshr):
Attached new patch. Here, I have changed the order of Mover status check in 
SPS. With this change, we can avoid the unnecessary instantiation and starting 
of SPS thread if Mover is running. Also, avoids shutdown calls in the exception 
block if the thread is interrupted by the user or admin. [~umamaheswararao], 
please review. Thanks!

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch, 
> HDFS-11264-HDFS-10285-02.patch, HDFS-11264-HDFS-10285-03.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS/Mover are 
> not running together; otherwise it may cause some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11264:

Attachment: HDFS-11264-HDFS-10285-03.patch

Attached a new patch. Here, I have changed the order of the Mover status check 
in SPS. With this change, we avoid the unnecessary instantiation and starting 
of the SPS thread if Mover is running. It also avoids shutdown calls in the 
exception block if the thread is interrupted by the user or admin. 
[~umamaheswararao], please review. Thanks!

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch, 
> HDFS-11264-HDFS-10285-02.patch, HDFS-11264-HDFS-10285-03.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS/Mover are 
> not running together; otherwise it may cause some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5042) Completed files lost after power failure

2017-07-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084148#comment-16084148
 ] 

Kihwal Lee commented on HDFS-5042:
--

bq. Here is the addendum patch to move fsync() out of lock.
It should definitely help. I think the only requirement is that the fsync be 
done before acking back to the client.
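
For readers following along, here is a minimal sketch of the underlying technique (plain NIO, not the DataNode code): make the finalize rename durable by fsync'ing the destination's parent directory before the ack is sent.

{code:java}
// Illustrative sketch only, not the DataNode implementation. Directory fsync
// via FileChannel works on Linux (ext3/ext4); other platforms may reject it.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DurableRenameSketch {
  public static void durableRename(Path src, Path dst) throws IOException {
    Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
    // Persist the directory entry itself, not just the file data, so a
    // finalized block does not reappear in the old directory after a crash.
    try (FileChannel dir = FileChannel.open(dst.getParent(),
        StandardOpenOption.READ)) {
      dir.force(true);
    }
    // Only after this point should the write be acked back to the client.
  }
}
{code}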

> Completed files lost after power failure
> 
>
> Key: HDFS-5042
> URL: https://issues.apache.org/jira/browse/HDFS-5042
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>Reporter: Dave Latham
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, 
> HDFS-5042-03.patch, HDFS-5042-04.patch, HDFS-5042-05-branch-2.patch, 
> HDFS-5042-05.patch, HDFS-5042-branch-2-01.patch, HDFS-5042-branch-2-05.patch, 
> HDFS-5042-branch-2.7-05.patch, HDFS-5042-branch-2.7-06.patch, 
> HDFS-5042-branch-2.8-05.patch, HDFS-5042-branch-2.8-06.patch, 
> HDFS-5042-branch-2.8-addendum.patch
>
>
> We suffered a cluster wide power failure after which HDFS lost data that it 
> had acknowledged as closed and complete.
> The client was HBase which compacted a set of HFiles into a new HFile, then 
> after closing the file successfully, deleted the previous versions of the 
> file.  The cluster then lost power, and when brought back up the newly 
> created file was marked CORRUPT.
> Based on reading the logs it looks like the replicas were created by the 
> DataNodes in the 'blocksBeingWritten' directory.  Then when the file was 
> closed they were moved to the 'current' directory.  After the power cycle 
> those replicas were again in the blocksBeingWritten directory of the 
> underlying file system (ext3).  When those DataNodes reported in to the 
> NameNode it deleted those replicas and lost the file.
> Some possible fixes could be having the DataNode fsync the directory(s) after 
> moving the block from blocksBeingWritten to current to ensure the rename is 
> durable or having the NameNode accept replicas from blocksBeingWritten under 
> certain circumstances.
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: 
> Creating 
> file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
>  blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
> blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: 
> /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
> blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
> blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
> lease on  file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming compacted file at 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  to 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
> RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Completed major compaction of 7 file(s) in n of 
> 

[jira] [Commented] (HDFS-5042) Completed files lost after power failure

2017-07-12 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083970#comment-16083970
 ] 

Nathan Roberts commented on HDFS-5042:
--

Wondering if we should make this feature configurable. On some filesystems 
(like ext4), these fsyncs affect much more than the datanode process. If YARN 
is using the same disks and is writing significant amounts of intermediate 
data or performing other disk-heavy operations, the entire system will see 
significantly degraded performance (like disks at 100% for tens of minutes).
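
If this were made configurable, the shape would presumably be a simple boolean switch read from the Configuration; a sketch under that assumption (the key name below is invented for illustration, not an existing Hadoop property):

{code:java}
// Illustrative sketch only: the configuration key is hypothetical.
import org.apache.hadoop.conf.Configuration;

public class ConfigurableFsyncSketch {
  // Hypothetical knob: operators on fsync-sensitive filesystems (e.g. ext4
  // shared with heavy YARN shuffle traffic) could disable the directory sync.
  static final String KEY = "dfs.datanode.example.fsync-on-finalize";
  static final boolean DEFAULT = true;

  private final boolean fsyncOnFinalize;

  public ConfigurableFsyncSketch(Configuration conf) {
    this.fsyncOnFinalize = conf.getBoolean(KEY, DEFAULT);
  }

  /** Called after a block has been moved to its finalized location. */
  public void maybeSync(Runnable directorySync) {
    if (fsyncOnFinalize) {
      directorySync.run();   // e.g. the durable-rename sketch shown earlier
    }
  }
}
{code}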

> Completed files lost after power failure
> 
>
> Key: HDFS-5042
> URL: https://issues.apache.org/jira/browse/HDFS-5042
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>Reporter: Dave Latham
>Assignee: Vinayakumar B
>Priority: Critical
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, 
> HDFS-5042-03.patch, HDFS-5042-04.patch, HDFS-5042-05-branch-2.patch, 
> HDFS-5042-05.patch, HDFS-5042-branch-2-01.patch, HDFS-5042-branch-2-05.patch, 
> HDFS-5042-branch-2.7-05.patch, HDFS-5042-branch-2.7-06.patch, 
> HDFS-5042-branch-2.8-05.patch, HDFS-5042-branch-2.8-06.patch, 
> HDFS-5042-branch-2.8-addendum.patch
>
>
> We suffered a cluster wide power failure after which HDFS lost data that it 
> had acknowledged as closed and complete.
> The client was HBase which compacted a set of HFiles into a new HFile, then 
> after closing the file successfully, deleted the previous versions of the 
> file.  The cluster then lost power, and when brought back up the newly 
> created file was marked CORRUPT.
> Based on reading the logs it looks like the replicas were created by the 
> DataNodes in the 'blocksBeingWritten' directory.  Then when the file was 
> closed they were moved to the 'current' directory.  After the power cycle 
> those replicas were again in the blocksBeingWritten directory of the 
> underlying file system (ext3).  When those DataNodes reported in to the 
> NameNode it deleted those replicas and lost the file.
> Some possible fixes could be having the DataNode fsync the directory(s) after 
> moving the block from blocksBeingWritten to current to ensure the rename is 
> durable or having the NameNode accept replicas from blocksBeingWritten under 
> certain circumstances.
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: 
> Creating 
> file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
>  blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
> blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: 
> /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
> blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
> blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
> blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
> lease on  file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming compacted file at 
> hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
>  to 
> 

[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-12 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Attachment: HDFS-12117.patch.01

Initial patch version, for review only. I still need to work on tests for the 
added functions and will provide another patch later with tests added.

Please let me know of any suggestions/improvements you may have.

Regards,
Wellington.

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.patch.01
>
>
> Currently, HttpFS lacks implementations for the SNAPSHOT-related methods of 
> the WebHDFS REST interface, as defined by the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the existing design 
> approach already used by other WebHDFS methods in the current HttpFS project, 
> so I'll be proposing an initial patch soon for review.
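
For context, the snapshot operations being exposed here map onto the existing Hadoop FileSystem API; a minimal client-side sketch of the calls an HttpFS/WebHDFS handler would ultimately delegate to (the path is an example only, and the directory is assumed to have been made snapshottable with "hdfs dfsadmin -allowSnapshot"):

{code:java}
// Illustrative sketch only -- not the HttpFS handler code. It shows the
// FileSystem calls behind CREATESNAPSHOT / RENAMESNAPSHOT / DELETESNAPSHOT.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SnapshotOpsSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/snapshottable-dir");   // example path only

    Path created = fs.createSnapshot(dir, "s1");     // CREATESNAPSHOT
    System.out.println("Created " + created);
    fs.renameSnapshot(dir, "s1", "s2");              // RENAMESNAPSHOT
    fs.deleteSnapshot(dir, "s2");                    // DELETESNAPSHOT
  }
}
{code}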



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12118) Ozone: Ozone shell: Add more testing for volume shell commands

2017-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083776#comment-16083776
 ] 

Hadoop QA commented on HDFS-12118:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876795/HDFS-12118-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7942b7550913 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20241/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20241/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20241/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Ozone shell: Add more testing for volume shell commands
> --
>
> Key: HDFS-12118
> URL: https://issues.apache.org/jira/browse/HDFS-12118
> 

[jira] [Updated] (HDFS-12118) Ozone: Ozone shell: Add more testing for volume shell commands

2017-07-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12118:
-
Attachment: HDFS-12118-HDFS-7240.002.patch

Thanks [~vagarychen] for the review!
All the comments make sense to me. Attached the updated patch.
Please take a look.


> Ozone: Ozone shell: Add more testing for volume shell commands
> --
>
> Key: HDFS-12118
> URL: https://issues.apache.org/jira/browse/HDFS-12118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12118-HDFS-7240.001.patch, 
> HDFS-12118-HDFS-7240.002.patch
>
>
> Currently there are not enough tests to cover all the ozone commands. Now we 
> have to test them manually to see if the commands run as expected. There are 
> lots of test cases we should add for all the volume, bucket and key commands. 
> To make reviews easier, I'd like to separate this work into three subtasks. 
> This JIRA is a good start, implementing tests for the volume commands.
> Plan: add unit tests for the following commands and their available options:
> * infoVolume
> * updateVolume
> * listVolume
> * createVolume



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12128) Namenode failover may make balancer's efforts be in vain

2017-07-12 Thread liuyiyang (JIRA)
liuyiyang created HDFS-12128:


 Summary: Namenode failover may make balancer's efforts be in vain
 Key: HDFS-12128
 URL: https://issues.apache.org/jira/browse/HDFS-12128
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: liuyiyang


The problem can be reproduced as follows:
1. In an HA cluster with imbalanced datanode usage, we plan to run 
"start-balancer.sh" to balance the cluster;
2. Before starting the balancer, trigger a failover of the namenodes; this makes 
all datanodes be marked as stale by the active namenode;
3. Start the balancer to balance the datanode usage;
4. While the balancer is running, the under-utilized datanodes' usage increases, 
but the over-utilized datanodes' usage stays unchanged for a long time.

Since all datanodes are marked as stale, deletions are postponed on stale 
datanodes. During balancing, the replicas on source datanodes can't be deleted 
immediately, so the total usage of the cluster increases and won't decrease 
until the datanodes' stale state is cleared.
When the datanodes send their next block report to the namenode (the default 
interval is 6h), the active namenode clears their stale state. I found that if 
replicas on source datanodes can't be deleted immediately via the del_hint sent 
to the namenode in the OP_REPLACE operation, the namenode will instead schedule 
the replicas on the datanodes with the least remaining space for deletion. 
Unfortunately, the datanodes with the least remaining space may be the target 
datanodes of the balancing, which leads to imbalanced datanode usage again.
If the balancer finishes before the next block report, all postponed 
over-replicated replicas will be deleted based on the remaining space of the 
datanodes, which may make the balancer's efforts fruitless.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12126) Ozone: Ozone shell: Add more testing for bucket shell commands

2017-07-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12126:
-
Description: 
Adding more unit tests for bucket commands, similar to HDFS-12118.


  was:
Adding more unit testS for bucket commands, similar to HDFS-12118.



> Ozone: Ozone shell: Add more testing for bucket shell commands
> --
>
> Key: HDFS-12126
> URL: https://issues.apache.org/jira/browse/HDFS-12126
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Adding more unit tests for bucket commands, similar to HDFS-12118.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12126) Ozone: Ozone shell: Add more testing for bucket shell commands

2017-07-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12126:
-
Description: 
Adding more unit tests for ozone bucket commands, similar to HDFS-12118.


  was:
Adding more unit tests for bucket commands, similar to HDFS-12118.



> Ozone: Ozone shell: Add more testing for bucket shell commands
> --
>
> Key: HDFS-12126
> URL: https://issues.apache.org/jira/browse/HDFS-12126
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Adding more unit tests for ozone bucket commands, similar to HDFS-12118.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12118) Ozone: Ozone shell: Add more testing for volume shell commands

2017-07-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083678#comment-16083678
 ] 

Yiqun Lin edited comment on HDFS-12118 at 7/12/17 9:07 AM:
---

bq. Additionally, I think it would be great if we follow up with other JIRAs 
for adding more tests for bucket and key commands to this class.
I have filed HDFS-12126 and HDFS-12127 to add unit tests for the bucket/key 
commands. I will take some time to implement these in the following days.


was (Author: linyiqun):
bq. Additionally, I think it would be great if we follow up with other JIRAs 
for adding more tests for bucket and key commands to this class.
Had filed HDFS-12126 and HDFS-12167 to add unit tests for bucket/key commands. 
I will take some time to implement these in the following days.

> Ozone: Ozone shell: Add more testing for volume shell commands
> --
>
> Key: HDFS-12118
> URL: https://issues.apache.org/jira/browse/HDFS-12118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12118-HDFS-7240.001.patch, 
> HDFS-12118-HDFS-7240.002.patch
>
>
> Currently there are not enough tests to cover all the ozone commands. Now we 
> have to test them manually to see if the commands run as expected. There are 
> lots of test cases we should add for all the volume, bucket and key commands. 
> To make reviews easier, I'd like to separate this work into three subtasks. 
> This JIRA is a good start, implementing tests for the volume commands.
> Plan: add unit tests for the following commands and their available options:
> * infoVolume
> * updateVolume
> * listVolume
> * createVolume



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12118) Ozone: Ozone shell: Add more testing for volume shell commands

2017-07-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083678#comment-16083678
 ] 

Yiqun Lin commented on HDFS-12118:
--

bq. Additionally, I think it would be great if we follow up with other JIRAs 
for adding more tests for bucket and key commands to this class.
Had filed HDFS-12126 and HDFS-12167 to add unit tests for bucket/key commands. 
I will take some time to implement these in the following days.

> Ozone: Ozone shell: Add more testing for volume shell commands
> --
>
> Key: HDFS-12118
> URL: https://issues.apache.org/jira/browse/HDFS-12118
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12118-HDFS-7240.001.patch, 
> HDFS-12118-HDFS-7240.002.patch
>
>
> Currently there are not enough tests to cover all the ozone commands. Now we 
> have to test them manually to see if the commands run as expected. There are 
> lots of test cases we should add for all the volume, bucket and key commands. 
> To make reviews easier, I'd like to separate this work into three subtasks. 
> This JIRA is a good start, implementing tests for the volume commands.
> Plan: add unit tests for the following commands and their available options:
> * infoVolume
> * updateVolume
> * listVolume
> * createVolume



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12127) Ozone: Ozone shell: Add more testing for key shell commands

2017-07-12 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12127:


 Summary: Ozone: Ozone shell: Add more testing for key shell 
commands
 Key: HDFS-12127
 URL: https://issues.apache.org/jira/browse/HDFS-12127
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, tools
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Adding more unit tests for ozone key commands, similar to HDFS-12118.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12126) Ozone: Ozone shell: Add more testing for bucket shell commands

2017-07-12 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12126:


 Summary: Ozone: Ozone shell: Add more testing for bucket shell 
commands
 Key: HDFS-12126
 URL: https://issues.apache.org/jira/browse/HDFS-12126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, tools
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Adding more unit testS for bucket commands, similar to HDFS-12118.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4262) Backport HTTPFS to Branch 1

2017-07-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083637#comment-16083637
 ] 

Andras Bokor commented on HDFS-4262:


This seems obsolete. Is it still intended to be fixed?

> Backport HTTPFS to Branch 1
> ---
>
> Key: HDFS-4262
> URL: https://issues.apache.org/jira/browse/HDFS-4262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
> Environment: IBM JDK, RHEL 6.3
>Reporter: Eric Yang
>Assignee: Yu Li
> Attachments: 01-retrofit-httpfs-cdh3u4-for-hadoop1.patch, 
> 02-cookie-from-authenticated-url-is-not-getting-to-auth-filter.patch, 
> 03-resolve-proxyuser-related-issue.patch, HDFS-4262-github.patch
>
>
> There is interest in backporting HTTPFS to the Hadoop 1 branch. After the 
> initial investigation, there are quite a few changes in HDFS-2178, plus several 
> related patches, including:
> HDFS-2284 Write Http access to HDFS
> HDFS-2646 Hadoop HttpFS introduced 4 findbug warnings
> HDFS-2649 eclipse:eclipse build fails for hadoop-hdfs-httpfs
> HDFS-2657 TestHttpFSServer and TestServerWebApp are failing on trunk
> HDFS-2658 HttpFS introduced 70 javadoc warnings
> The main challenge of backporting is that all these patches, including HDFS-2178, 
> are for 2.X, whose code base has been refactored a lot and is quite different 
> from 1.X, so it seems we have to backport the changes manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083634#comment-16083634
 ] 

Yuanbo Liu edited comment on HDFS-12069 at 7/12/17 8:22 AM:


[~cheersyang] Thanks for your update. Looks good to me, I'm +1(no-binding) for 
your v11 patch.
It compiles successfully on my local laptop.


was (Author: yuanbo):
[~cheersyang] Thanks for your update. Looks good to me, I'm +1(no-binding) for 
your v11 patch.

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch, HDFS-12069-HDFS-7240.009.patch, 
> HDFS-12069-HDFS-7240.010.patch, HDFS-12069-HDFS-7240.011.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.
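
The kind of abstraction being discussed is, roughly, a pluggable key-value contract with interchangeable backends; a minimal sketch under that assumption (not the actual interface proposed in the patch):

{code:java}
// Illustrative sketch only -- not the actual HDFS-12069 interface.
import java.io.Closeable;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

interface MetadataStoreSketch extends Closeable {
  void put(byte[] key, byte[] value) throws IOException;
  byte[] get(byte[] key) throws IOException;   // null if absent
  void delete(byte[] key) throws IOException;
}

// An in-memory stand-in keeps the sketch self-contained; real backends would
// wrap an on-disk key-value store behind the same contract.
class InMemoryStoreSketch implements MetadataStoreSketch {
  private final Map<ByteBuffer, byte[]> map = new HashMap<>();
  public void put(byte[] k, byte[] v) { map.put(ByteBuffer.wrap(k.clone()), v.clone()); }
  public byte[] get(byte[] k) { return map.get(ByteBuffer.wrap(k)); }
  public void delete(byte[] k) { map.remove(ByteBuffer.wrap(k)); }
  public void close() { map.clear(); }
}
{code}

Whether LevelDB or RocksDB sits behind such a contract then becomes an implementation detail selected by configuration rather than something callers depend on.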



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083634#comment-16083634
 ] 

Yuanbo Liu commented on HDFS-12069:
---

[~cheersyang] Thanks for your update. Looks good to me, I'm +1(no-binding) for 
your v11 patch.

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch, HDFS-12069-HDFS-7240.009.patch, 
> HDFS-12069-HDFS-7240.010.patch, HDFS-12069-HDFS-7240.011.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9126) namenode crash in fsimage download/transfer

2017-07-12 Thread linhaiqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083601#comment-16083601
 ] 

linhaiqiang commented on HDFS-9126:
---

We have already encountered a similar issue. As the logs show, an unexpected 
45-second timeout occurred while the active namenode's corresponding 
healthMonitor was checking the namenode's health (the health check request was 
emitted at 2017-07-06 17:22:12). The timeout made the healthMonitor believe the 
active namenode had failed, so it closed the zk client to quit leader election. 
However, we did not find any exception in the namenode's log or GC log. An even 
stranger thing is that the namenode's log ends at 2017-07-06 17:20:37.

After analysing the source code, the scenario is as follows:
1. While the active namenode's corresponding healthMonitor is checking the NN's 
health, a 45-second timeout switches the state to SERVICE_NOT_RESPONDING.
2. The active NN quits leader election. The zkClient closes its connection and, 
thus, the temporary lock znode on zk is deleted.
3. The standby NN gains the leadership from zk in order to switch itself to the 
new active NN.
4. To prevent split-brain, before switching to the active state, the standby NN 
sends an rpc request to switch the current active NN's state to standby.
5. If this rpc request fails, the standby NN tries to kill the target NN JVM 
via ssh. This is exactly what happened in our case.
6. After killing the target NN successfully, the standby NN switches itself to 
active.

Though we have understood the scenario, what happened on the previous active NN 
is still unknown. Sadly, a jstack trace can no longer be obtained since the JVM 
was killed by the standby NN. Does anyone understand this or have a similar 
problem? Thanks for sharing in advance.

namenode log
2017-07-06 17:20:25,944 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 192.168.74.160:50010 is added to 
blk_8181385931_7145837309{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-8b015249-3dfc-46d6-b575-a5217dd3e40e:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-a88850be-de4f-4cf8-b6ec-10c8116c4226:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-09006ca0-3bd0-41a2-b4b9-90a28682031b:NORMAL|RBW]]}
 size 0
2017-07-06 17:20:25,945 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 192.168.8.230:50010 is added to 
blk_8181385931_7145837309{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-8b015249-3dfc-46d6-b575-a5217dd3e40e:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-a88850be-de4f-4cf8-b6ec-10c8116c4226:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-09006ca0-3bd0-41a2-b4b9-90a28682031b:NORMAL|RBW]]}
 size 0
2017-07-06 17:20:25,945 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 192.168.74.79:50010 is added to 
blk_8181385931_7145837309{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-8b015249-3dfc-46d6-b575-a5217dd3e40e:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-a88850be-de4f-4cf8-b6ec-10c8116c4226:NORMAL|RBW],
 
ReplicaUnderConstruction[[DISK]DS-09006ca0-3bd0-41a2-b4b9-90a28682031b:NORMAL|RBW]]}
 size 0
2017-07-06 17:20:25,945 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/tmp/hive-mobdss/hive_2017-07-06_17-19-52_775_5225473556650420863-1/_task_tmp.-ext-10003/_tmp.00_0
 is closed by DFSClient_attempt_1482378778761_39818303_m_00_0_1289941958_1
2017-07-06 17:20:25,946 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/hbase/data/ns_spider/p_site_product/2a79796e1be609fc26ecf1ab58f5aac9/.tmp/733c7b7008594135ae6fcb540f0ca4d5
 is closed by 
DFSClient_hb_rs_slave557-prd3.hadoop.com,60020,1498730232536_-2089669913_35
2017-07-06 17:20:26,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: 
/tmp/hadoop-yarn/staging/mobdss/.staging/job_1482378778761_39818325/job_1482378778761_39818325_1.jhist
 for DFSClient_NONMAPREDUCE_-907911548_1
2017-07-06 17:20:30,826 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: 
/tmp/hadoop-yarn/staging/yyadmin/.staging/job_1482378778761_39818328/job_1482378778761_39818328_1.jhist
 for DFSClient_NONMAPREDUCE_-1343712942_1
2017-07-06 17:20:32,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: 
/tmp/hadoop-yarn/staging/yyadmin/.staging/job_1482378778761_39818327/job_1482378778761_39818327_1.jhist
 for DFSClient_NONMAPREDUCE_-904958265_1
2017-07-06 17:20:33,722 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: 
/tmp/hadoop-yarn/staging/yyadmin/.staging/job_1482378778761_39818341/job_1482378778761_39818341_1.jhist
 for DFSClient_NONMAPREDUCE_1250585342_1
2017-07-06 17:20:37,402 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
Rescanning after 3 milliseconds


zkfc log
2017-07-06 17:22:12,264 WARN org.apache.hadoop.ha.HealthMonitor: 
Transport-level exception trying to monitor health of NameNode at 
hostname1/hostname1:8020: Call From hostname1/hostname1 to namenode1-pr

[jira] [Resolved] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia resolved HDFS-12109.
---
Resolution: Not A Bug

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083520#comment-16083520
 ] 

Luigi Di Fraia edited comment on HDFS-12109 at 7/12/17 6:42 AM:


Thanks [~surendrasingh]. Appreciate your help with this. Indeed it was the 
property name that was using the wrong namespace. Oddly enough, the property I 
was passing on the commandline was correctly defined, which somehow masked out 
the hdfs-site.xml configuration issue.
I am resolving the issue as "Not a bug".
Thanks again.
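
For anyone hitting the same symptom, a small diagnostic sketch (illustrative only) that checks whether the failover proxy provider key actually matches the nameservice taken from fs.defaultFS, which is exactly the mismatch ("...provider.mycluster" vs. "saccluster") behind this report:

{code:java}
// Illustrative sketch only: checks that dfs.client.failover.proxy.provider.<ns>
// is defined for the nameservice used in fs.defaultFS.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HaConfigCheckSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();  // loads core/hdfs-site.xml
    String defaultFs = conf.get("fs.defaultFS", "");
    String nameservice = URI.create(defaultFs).getHost();
    String providerKey = "dfs.client.failover.proxy.provider." + nameservice;
    if (conf.get(providerKey) == null) {
      System.err.println("Missing " + providerKey
          + " -- a wrong suffix here surfaces as UnknownHostException: "
          + nameservice);
    } else {
      System.out.println(providerKey + " = " + conf.get(providerKey));
    }
  }
}
{code}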


was (Author: luigidifraia):
Thanks [~surendrasingh]. Appreciate your help with this. Indeed it was the 
property name that was using the wrong namespace.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083520#comment-16083520
 ] 

Luigi Di Fraia commented on HDFS-12109:
---

Thanks [~surendrasingh]. Appreciate your help with this. Indeed it was the 
property name that was using the wrong namespace.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org