[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-07-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104471#comment-16104471
 ] 

Xiao Chen commented on HDFS-10899:
--

Had a productive offline review session with [~andrew.wang], where we discussed 
several things. Thanks Andrew!

- By design snapshots are immutable, so even after re-encryption, the snapshots 
of the EZ will still have old edeks. If there was a security breach, admins need 
to remove the old snapshots, or take manual measures (e.g. cp, or mv to a 
non-snapshot dir, then re-encrypt). We should add this to the docs. This jira 
should not attempt to touch snapshots.
- Perf - latency
I have been running this in a test cluster (with Kerberos + SSL), with 1 sample 
EZ and 1M files. The cluster is on 
[GCE|https://cloud.google.com/compute/docs/machine-types]; the NN is an 
n1-highmem-4, with 2 KMS instances on n1-standard-2.
Some perf numbers (when all 1M files have old edeks):
-- 1 edek thread: 40~50 mins.
-- 10 edek threads (5000 edeks per task): 13 mins
-- 30 edek threads (1000 edeks per task): 12 mins
Time to generate all the tasks for 1M files is ~10 seconds.\\
The bottleneck of the entire operation is contacting the KMS: from the NN side, 
each HTTPS request to the KMS took single-digit milliseconds on average, while 
inside the KMS the actual re-encryption took only tens of microseconds. The 
default keep-alive of 5 connections is used, and the first 5 connections (clean 
setup) took even longer.
This led me to prototype a batched re-encryption interface on the KMS, whose 
perf is:
---   20 threads (1000 per task): 1.5 min.
This fits well within our goal of 200M files in 8 hours.

After discussing with Andrew, we felt the batched API is the way to go. I will 
file another jira to add a batched re-encryption API to the KMS, and update 
this patch to use it.
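To make the idea concrete, a minimal sketch of what the batched interface could 
look like (the names {{BatchedReencryptor}} / {{reencryptEncryptedKeys}} are 
placeholders pending the follow-up jira, not a committed API):
{code}
// Sketch only: a possible shape for the batched KMS re-encryption call.
// Interface and method names are placeholders, not the committed API.
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.List;

import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

public interface BatchedReencryptor {
  /**
   * Re-encrypts a batch of EDEKs with the latest version of the EZ key,
   * amortizing one HTTPS round trip over the whole batch. The returned
   * list is position-aligned with the input list.
   */
  List<EncryptedKeyVersion> reencryptEncryptedKeys(
      List<EncryptedKeyVersion> edeks)
      throws IOException, GeneralSecurityException;
}
{code}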

- Perf - memory
The above test was done without any throttling. We should throttle the 
{{ReencryptionHandler}} when instantiating Callables, to keep NN memory usage 
sane. The plan is to use a static calculation, so we only keep a configurable 
number of Callables in memory - the handler simply waits until a Callable is 
done and released before creating a new one (see the sketch below). The default 
will be calculated from the number of cores on the NN. We should also consider 
how many edeks go into each Callable. Will implement this soon.
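A minimal sketch of the throttling idea, assuming a semaphore-bounded submitter 
(all names here are illustrative, not from the patch):
{code}
// Sketch only: bound the number of in-flight re-encryption Callables so the
// handler blocks instead of queueing unbounded work in NN memory.
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.Semaphore;

class ThrottledSubmitter {
  private final Semaphore slots;
  private final CompletionService<Void> batches;

  ThrottledSubmitter(CompletionService<Void> batches, int maxInFlight) {
    this.batches = batches;
    this.slots = new Semaphore(maxInFlight);
  }

  /** Blocks until a previously submitted Callable finishes and frees a slot. */
  void submit(final Callable<Void> task) throws InterruptedException {
    slots.acquire();
    batches.submit(new Callable<Void>() {
      @Override
      public Void call() throws Exception {
        try {
          return task.call();
        } finally {
          slots.release(); // let the handler create the next Callable
        }
      }
    });
  }
}
{code}
The default {{maxInFlight}} could be derived from the NN's core count, e.g. 
{{Runtime.getRuntime().availableProcessors()}} times a small factor.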

- Perf - lock throttling
Ideally we'd also throttle the {{ReencryptionHandler}} to control what 
percentage of time it can hold the read lock, and similarly the 
{{ReencryptionUpdater}} for the write lock. But since we already need to wait 
for the Callables, this comes somewhat naturally, i.e. we won't be holding a 
write lock continuously for a long time. So we may not implement this in v1, 
pending confirmation from further perf runs.
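If we do end up needing it, a rough sketch of ratio-based lock throttling (the 
cap value and the chunk abstraction are assumptions, not the patch's design):
{code}
// Sketch only: keep the fraction of wall-clock time spent holding the read
// lock under a configurable cap by sleeping between lock acquisitions.
import java.util.concurrent.locks.ReadWriteLock;
import java.util.function.BooleanSupplier;

final class LockRatioThrottle {
  static void run(ReadWriteLock lock, BooleanSupplier scanChunk,
      double maxLockRatio) throws InterruptedException {
    long lockedNanos = 0;
    final long startNanos = System.nanoTime();
    boolean hasMore = true;
    while (hasMore) {
      long t0 = System.nanoTime();
      lock.readLock().lock();
      try {
        hasMore = scanChunk.getAsBoolean(); // bounded work per acquisition
      } finally {
        lock.readLock().unlock();
      }
      lockedNanos += System.nanoTime() - t0;
      long elapsed = System.nanoTime() - startNanos;
      if (elapsed > 0 && lockedNanos > elapsed * maxLockRatio) {
        // Sleep just long enough to bring the locked fraction under the cap.
        long sleepMs =
            (long) ((lockedNanos / maxLockRatio - elapsed) / 1_000_000);
        Thread.sleep(Math.max(1L, sleepMs));
      }
    }
  }
}
{code}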

- Failure handling:
Now that we use batched re-encryption, it makes sense to simply retry the 
entire Callable (hence the entire batch, since it fails in a single call). It 
then seems more admin-friendly to retry forever, with backoff; if the admin 
finds this annoying, they can always cancel. This is better than the current 
behavior of failing the re-encryption after a few attempts, telling the admin, 
and forcing them to rerun the command.
We should also add fault injectors to unit test some failure scenarios.
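A minimal sketch of the retry-forever loop (backoff values and the cancellation 
check are illustrative, not the patch's actual code):
{code}
// Sketch only: retry a failed batch indefinitely with capped exponential
// backoff; the admin's cancel is the only way out.
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.function.BooleanSupplier;

final class RetryForever {
  static <T> T callWithBackoff(Callable<T> batch, BooleanSupplier cancelled)
      throws Exception {
    long backoffMs = 1000;            // initial backoff
    final long maxBackoffMs = 60_000; // cap so retries stay responsive
    while (!cancelled.getAsBoolean()) {
      try {
        return batch.call();          // re-runs the entire batched KMS call
      } catch (IOException e) {
        Thread.sleep(backoffMs);
        backoffMs = Math.min(backoffMs * 2, maxBackoffMs);
      }
    }
    throw new InterruptedException("re-encryption cancelled by admin");
  }
}
{code}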

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt 
> edek design doc.pdf, Re-encrypt edek design doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs

2017-07-27 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12036:

Attachment: HDFS-12036.002.patch

> Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, 
> getErasureCodingCodecs
> --
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch
>
>
> These three FSNamesystem operations do not yet record audit logs. I am not 
> sure how useful these audit logs would be, but I thought I should file this so 
> that they don't get dropped if they turn out to be needed.






[jira] [Commented] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104452#comment-16104452
 ] 

Hadoop QA commented on HDFS-12036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 178 unchanged - 0 fixed = 180 total (was 178) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879289/HDFS-12036.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8f13f3fa6d8e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38c6fa5 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20459/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20459/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20459/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20459/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-3745) fsck prints that it's using KSSL even when it's in fact using SPNEGO for authentication

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104431#comment-16104431
 ] 

Hadoop QA commented on HDFS-3745:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} root: The patch generated 0 new + 367 unchanged - 1 
fixed = 367 total (was 368) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.common.TestJspHelper |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-3745 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12751156/HDFS-3745.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2b0e674ff77a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-11203) Rename support during re-encrypt EDEK

2017-07-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104409#comment-16104409
 ] 

Xiao Chen commented on HDFS-11203:
--

Hi [~daryn],

I discussed this with [~andrew.wang] today, and our understanding is that 
you're proposing only re-resolution, not changing how we iterate the EZ. I have 
edited the description of this jira to state the problem more clearly. Could 
you check and see if we're on the same page?

Thanks.

> Rename support during re-encrypt EDEK
> -
>
> Key: HDFS-11203
> URL: https://issues.apache.org/jira/browse/HDFS-11203
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: encryption
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> Currently HDFS-10899 disables renames within the EZ if it's under 
> re-encryption (similar to the current cross-zone rename checks).
> We'd like to support rename in the long run, so the cluster is fully 
> functioning during re-encryption.
> The reasons rename is particularly difficult are:
> - We want to re-encrypt all files under an EZ in one pass, without missing any.
> - We want to iterate through the files and keep track of where we are (i.e. a 
> cursor), so in case of NN failover/crash, we can resume from fsimage/edits.
> - We cannot guarantee the namespace is unchanged during re-encryption. Newly 
> created files automatically have new edeks, and we don't care about deleted 
> files. But if a file is renamed from behind the cursor to before it, it may 
> be missed in the re-encryption.
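To make the rename hazard in the last bullet concrete, a toy sketch 
(sorted-order iteration with a path cursor is assumed for illustration; these 
are not the NN's actual data structures):
{code}
import java.util.TreeSet;

// Toy model: iterate EZ files in sorted order, resuming from a cursor.
// A rename that moves a file from after the cursor to before it is missed.
public class CursorMissDemo {
  public static void main(String[] args) {
    TreeSet<String> ezFiles = new TreeSet<>();
    ezFiles.add("/zone/a");
    ezFiles.add("/zone/m");
    ezFiles.add("/zone/z");

    String cursor = "/zone/m"; // everything <= cursor is already re-encrypted

    // Concurrent rename: /zone/z (not yet processed) -> /zone/b (behind cursor).
    ezFiles.remove("/zone/z");
    ezFiles.add("/zone/b");

    // Resuming from the cursor never visits /zone/b; its old edek survives.
    for (String f : ezFiles.tailSet(cursor, false)) {
      System.out.println("re-encrypting " + f);
    }
    // Prints nothing: the renamed file was silently skipped.
  }
}
{code}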






[jira] [Comment Edited] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104408#comment-16104408
 ] 

Anu Engineer edited comment on HDFS-12034 at 7/28/17 3:49 AM:
--

[~elek] Can you please look at the Jenkins errors?



was (Author: anu):
[~elek] Can you please look at the compiler errors.


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/MXBeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)
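As a sketch of how such data could be exposed to the web UI, a hypothetical 
MXBean (the interface name, getters, and registration are assumptions, not the 
patch's actual wiring):
{code}
// Sketch only: a possible JMX surface for KSM metrics, mirroring the
// fields listed above. Registration uses Hadoop's MBeans helper.
import org.apache.hadoop.metrics2.util.MBeans;

public interface KSMMXBean {
  long getNumVolumeCreates();
  long getNumVolumeModifies();
}

// Example registration, so the jmx servlet and web UI can read the values:
//   MBeans.register("KeySpaceManager", "KSMInfo", ksmImplementingMxBean);
{code}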






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104408#comment-16104408
 ] 

Anu Engineer commented on HDFS-12034:
-

[~elek] Can you please look at the compiler errors?


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/MXBeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)






[jira] [Updated] (HDFS-11203) Rename support during re-encrypt EDEK

2017-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11203:
-
Description: 
Currently HDFS-10899 disables renames within the EZ if it's under 
re-encryption (similar to the current cross-zone rename checks).

We'd like to support rename in the long run, so the cluster is fully 
functioning during re-encryption.

The reasons rename is particularly difficult are:
- We want to re-encrypt all files under an EZ in one pass, without missing any.
- We want to iterate through the files and keep track of where we are (i.e. a 
cursor), so in case of NN failover/crash, we can resume from fsimage/edits.
- We cannot guarantee the namespace is unchanged during re-encryption. Newly 
created files automatically have new edeks, and we don't care about deleted 
files. But if a file is renamed from behind the cursor to before it, it may be 
missed in the re-encryption.

  was:
Currently HDFS-10899 disables renames within the EZ if it's under 
re-encryption. (similar to current cross-zone rename checks).

We'd like to support rename in the long run, so cluster is fully functioning 
during re-encryption


> Rename support during re-encrypt EDEK
> -
>
> Key: HDFS-11203
> URL: https://issues.apache.org/jira/browse/HDFS-11203
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: encryption
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> Currently HDFS-10899 disables renames within the EZ if it's under 
> re-encryption (similar to the current cross-zone rename checks).
> We'd like to support rename in the long run, so the cluster is fully 
> functioning during re-encryption.
> The reasons rename is particularly difficult are:
> - We want to re-encrypt all files under an EZ in one pass, without missing any.
> - We want to iterate through the files and keep track of where we are (i.e. a 
> cursor), so in case of NN failover/crash, we can resume from fsimage/edits.
> - We cannot guarantee the namespace is unchanged during re-encryption. Newly 
> created files automatically have new edeks, and we don't care about deleted 
> files. But if a file is renamed from behind the cursor to before it, it may 
> be missed in the re-encryption.






[jira] [Commented] (HDFS-12205) Ozone: List Key on an empty ozone bucket fails with command failed error

2017-07-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104403#comment-16104403
 ] 

Yiqun Lin commented on HDFS-12205:
--

Hi [~msingh], thanks for reporting this. I think you are testing in the local 
handler mode, because in the distributed handler, the current list-key call 
won't return {{createdOn}} and {{md5hash}}.

> Ozone: List Key on an empty ozone bucket fails with command failed error
> 
>
> Key: HDFS-12205
> URL: https://issues.apache.org/jira/browse/HDFS-12205
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> Tried listing an empty bucket and it fails with
> {code}
> [root@88970a014980 opt]# hadoop-3.0.0-alpha4-SNAPSHOT/bin/hdfs oz -listKey 
> http://localhost:9864/vol1/bucket1
> Command Failed : 
> {"httpCode":400,"shortMessage":"invalidResourceName","resource":"vol1/bucket1","message":"Invalid
>  volume, bucket or key 
> name.","requestID":"a38471bb-3fbf-416f-b89d-758506185244","hostName":"88970a014980"}
> {code}
> However, after adding one key, the same command works.
> {code}
> [root@88970a014980 opt]# hadoop-3.0.0-alpha4-SNAPSHOT/bin/hdfs oz -listKey 
> http://localhost:9864/vol1/bucket1
> {
>   "version" : 0,
>   "md5hash" : "d41d8cd98f00b204e9800998ecf8427e",
>   "createdOn" : "Thu, 27 Jul 2017 11:43:55 +",
>   "size" : 0,
>   "keyName" : "key1"
> }
> {code}
> I feel that for an empty bucket, an empty JSON response should be returned.






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104385#comment-16104385
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
2 unchanged - 1 fixed = 3 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.replication.TestContainerReplicationManager |
|   | hadoop.ozone.scm.node.TestQueryNode |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || 

[jira] [Updated] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs

2017-07-27 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12036:

Status: Patch Available  (was: Open)

> Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, 
> getErasureCodingCodecs
> --
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12036.001.patch
>
>
> These three FSNamesystem operations do not yet record audit logs. I am not 
> sure how useful these audit logs would be, but I thought I should file this so 
> that they don't get dropped if they turn out to be needed.






[jira] [Updated] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs

2017-07-27 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12036:

Attachment: HDFS-12036.001.patch

> Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, 
> getErasureCodingCodecs
> --
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12036.001.patch
>
>
> These three FSNamesystem operations do not yet record audit logs. I am not 
> sure how useful these audit logs would be, but I thought I should file this so 
> that they don't get dropped if they turn out to be needed.






[jira] [Commented] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104377#comment-16104377
 ] 

Hadoop QA commented on HDFS-10326:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818243/HDFS-10326.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  

[jira] [Commented] (HDFS-12198) Document missing namenode metrics that added recently

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104366#comment-16104366
 ] 

Hadoop QA commented on HDFS-12198:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12198 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879282/HDFS-12198.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 3d23afa33fbc 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38c6fa5 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20458/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document missing namenode metrics that added recently
> -
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-12198.001.patch
>
>
> Some namenode metrics added recently haven't been documented in 
> {{Metrics.md}}. The following metrics and related JIRAs are affected:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}
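For context on how the quantile arrays above are typically instantiated, a 
sketch in the usual metrics2 style (the interval wiring and metric names are 
illustrative, not the actual NameNode code):
{code}
// Sketch: a MutableRate paired with per-interval MutableQuantiles, wired up
// in the usual metrics2 style. Intervals and names here are illustrative.
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;
import org.apache.hadoop.metrics2.lib.MutableRate;

public class ResourceCheckMetrics {
  private final MetricsRegistry registry = new MetricsRegistry("NameNode");
  private final MutableRate resourceCheckTime =
      registry.newRate("ResourceCheckTime", "Resource check time");
  private final MutableQuantiles[] resourceCheckTimeQuantiles;

  public ResourceCheckMetrics(int[] percentileIntervals) {
    resourceCheckTimeQuantiles =
        new MutableQuantiles[percentileIntervals.length];
    for (int i = 0; i < percentileIntervals.length; i++) {
      int interval = percentileIntervals[i];
      resourceCheckTimeQuantiles[i] = registry.newQuantiles(
          "resourceCheckTime" + interval + "s",
          "resource check time", "ops", "latency", interval);
    }
  }

  public void addResourceCheckTime(long durationMs) {
    resourceCheckTime.add(durationMs);
    for (MutableQuantiles q : resourceCheckTimeQuantiles) {
      q.add(durationMs);
    }
  }
}
{code}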






[jira] [Updated] (HDFS-12198) Document missing namenode metrics that added recently

2017-07-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12198:
-
Attachment: HDFS-12198.001.patch

Attached the patch. Kindly review.

> Document missing namenode metrics that added recently
> -
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-12198.001.patch
>
>
> Some namenode metrics added recently haven't been documented in 
> {{Metrics.md}}. The following metrics and related JIRAs are affected:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}






[jira] [Updated] (HDFS-12198) Document missing namenode metrics that added recently

2017-07-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12198:
-
Status: Patch Available  (was: Open)

> Document missing namenode metrics that added recently
> -
>
> Key: HDFS-12198
> URL: https://issues.apache.org/jira/browse/HDFS-12198
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-12198.001.patch
>
>
> Some namenode metrics added recently haven't been documented in 
> {{Metrics.md}}. The following metrics and related JIRAs are affected:
> *HDFS-12043*:
> {noformat}
> @Metric ("Number of successful re-replications")
> MutableCounterLong successfulReReplications;
> @Metric ("Number of times we failed to schedule a block re-replication.")
> MutableCounterLong numTimesReReplicationNotScheduled;
> @Metric("Number of timed out block re-replications")
> MutableCounterLong timeoutReReplications;
> {noformat}
> *HDFS-11907*:
> {noformat}
> @Metric("Resource check time") private MutableRate resourceCheckTime;
> private final MutableQuantiles[] resourceCheckTimeQuantiles;
> {noformat}
> *HADOOP-14502*:
> {noformat}
> @Metric("Number of blockReports from individual storages")
> final MutableRate storageBlockReport;
> final MutableQuantiles[] storageBlockReportQuantiles;
> {noformat}






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104340#comment-16104340
 ] 

Hadoop QA commented on HDFS-12034:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 9 
unchanged - 111 fixed = 9 total (was 120) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 29s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 8 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Unread field:ServiceRuntimeInfo.java:[line 51] |
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.ozone.ksm.TestKSMMetrcis |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDFS-11733) TestGetBlocks.getBlocksWithException() ignores datanode and size parameters.

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104335#comment-16104335
 ] 

Hadoop QA commented on HDFS-11733:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12867287/HDFS-11733.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 22a8c7157e76 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38c6fa5 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20451/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20451/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20451/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Updated] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11785:
---
Fix Version/s: (was: 2.7.4)

> Backport HDFS-9902 to branch-2.7: Support different values of 
> dfs.datanode.du.reserved per storage type
> ---
>
> Key: HDFS-11785
> URL: https://issues.apache.org/jira/browse/HDFS-11785
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-11785-branch-2.7.patch
>
>
> As per the discussion on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser],
> backport HDFS-9902 to branch-2.7.






[jira] [Updated] (HDFS-11732) Backport HDFS-8498 to branch-2.7: Blocks can be committed with wrong size

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11732:
---
Fix Version/s: (was: 2.7.4)

> Backport HDFS-8498 to branch-2.7: Blocks can be committed with wrong size
> -
>
> Key: HDFS-11732
> URL: https://issues.apache.org/jira/browse/HDFS-11732
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Attachments: HDFS-11732-branch-2.7.00.patch, 
> HDFS-11732-branch-2.7.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104270#comment-16104270
 ] 

Hadoop QA commented on HDFS-12151:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDataXceiverBackwardsCompat |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879264/HDFS-12151.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db46578c526c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3c7300 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20447/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104252#comment-16104252
 ] 

Andrew Wang commented on HDFS-12044:


BTW, did we file a JIRA for this follow-on?

bq. that DataNode#transferBlock does not create its Daemon in the xceiver 
thread group (which is how we currently count the # of xceivers).

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> ---
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch, 
> HDFS-12044.05.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to 
> {{corePoolSize=2}} and {{maxPoolSize=8}}, and it rejects additional tasks 
> once its queue is full.
> Suppose {{BlockManager#maxReplicationStream}} is larger than 
> {{ErasureCodingWorker#stripedReconstructionPool#corePoolSize/maxPoolSize}}, 
> for example {{maxReplicationStream=20}} with {{corePoolSize=2, 
> maxPoolSize=8}}. The NN sends up to {{maxTransfer}} reconstruction tasks to 
> the DN on each heartbeat, where {{maxTransfer}} is calculated in 
> {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - 
> xmitsInProgress;
> {code}
> However, at any given time 
> {{ErasureCodingWorker#stripedReconstructionPool}} accounts for only 2 
> {{xmitInProcess}}. So on each 3s heartbeat the NN sends about {{20-2 = 18}} 
> reconstruction tasks to the DN, and the DN throws most of them away if 8 
> tasks are already queued. The NN then takes longer to re-detect these 
> blocks as under-replicated and schedule new tasks.
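
A minimal, self-contained sketch (not the actual {{ErasureCodingWorker}} 
code) of why a bounded pool drops overflow tasks once its threads and queue 
are saturated; the pool sizes, queue capacity, and task count below are 
illustrative only:

{code}
import java.util.concurrent.*;

public class RejectionDemo {
  public static void main(String[] args) {
    // corePoolSize=2, maxPoolSize=8, bounded queue of 8 -- mirrors the
    // defaults described above; the handler just logs each dropped task.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(8),
        (r, e) -> System.out.println("rejected: " + r));
    // Submit 18 long-running tasks, as one heartbeat with
    // maxReplicationStreams=20 and xmitsInProgress=2 would.
    for (int i = 0; i < 18; i++) {
      pool.execute(() -> {
        try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
      });
    }
    // 8 tasks run, 8 sit in the queue, and the last 2 are rejected outright.
    pool.shutdown();
  }
}
{code}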



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104251#comment-16104251
 ] 

Andrew Wang commented on HDFS-12131:


I've already got a patch out at HDFS-12206 to fix the replicated/ecblock 
naming; I noticed that while looking at this earlier :) If you want to 
review, that'd be great.

My vote is to set a good example for the future and keep the names the same. 
Optionally, we can add new metrics with matched names and deprecate the 
mismatched ones. We could do that for StaleDataNodes here, and leave the rest 
for another patch.

> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch
>
>
> A number of useful numbers are emitted via the FSNamesystem JMX, but not 
> through the metrics system. These would be useful to be able to track over 
> time, e.g. to alert on via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.
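
As a rough sketch of how small the change described above is (using the 
metrics2 annotation style from hadoop-common; the class, method body, and 
help text here are illustrative, not the actual patch):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;

@Metrics(context = "dfs")
class FsNamesystemMetricsSketch {
  // Annotating the existing JMX getter publishes the same value through
  // the metrics system as well; the name/help strings are hypothetical.
  @Metric({"NumLiveDataNodes", "Number of DataNodes marked live"})
  public int getNumLiveDataNodes() {
    return 0; // placeholder -- the real getter delegates to DatanodeManager
  }
}
{code}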



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104229#comment-16104229
 ] 

Hadoop QA commented on HDFS-11882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
8 unchanged - 0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879135/HDFS-11882.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 175ae5d78318 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3c7300 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20446/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| findbugs | 

[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104227#comment-16104227
 ] 

Andrew Wang commented on HDFS-12044:


+1 thanks Eddy, that checkstyle error looks extant.

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> ---
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch, 
> HDFS-12044.05.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to 
> {{corePoolSize=2}} and {{maxPoolSize=8}}, and it rejects additional tasks 
> once its queue is full.
> Suppose {{BlockManager#maxReplicationStream}} is larger than 
> {{ErasureCodingWorker#stripedReconstructionPool#corePoolSize/maxPoolSize}}, 
> for example {{maxReplicationStream=20}} with {{corePoolSize=2, 
> maxPoolSize=8}}. The NN sends up to {{maxTransfer}} reconstruction tasks to 
> the DN on each heartbeat, where {{maxTransfer}} is calculated in 
> {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - 
> xmitsInProgress;
> {code}
> However, at any given time 
> {{ErasureCodingWorker#stripedReconstructionPool}} accounts for only 2 
> {{xmitInProcess}}. So on each 3s heartbeat the NN sends about {{20-2 = 18}} 
> reconstruction tasks to the DN, and the DN throws most of them away if 8 
> tasks are already queued. The NN then takes longer to re-detect these 
> blocks as under-replicated and schedule new tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104217#comment-16104217
 ] 

Hudson commented on HDFS-12190:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12063/])
HDFS-12190. Enable 'hdfs dfs -stat' to display access time. Contributed 
(yzhang: rev c6330f22a5e5c2370bab885f9bea4bf8f5e9cf44)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch, HDFS-12190.005.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10614) Appended blocks can be closed even before IBRs from DataNodes

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104212#comment-16104212
 ] 

Hadoop QA commented on HDFS-10614:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-10614 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865892/HDFS-10614.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20453/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Appended blocks can be closed even before IBRs from DataNodes
> -
>
> Key: HDFS-10614
> URL: https://issues.apache.org/jira/browse/HDFS-10614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10614.01.patch, HDFS-10614.02.patch, 
> HDFS-10614.03.patch
>
>
> Scenario:
>1. Open the file for append().
>2. Trigger append pipeline setup by adding some data.
>3. Suppose the RECEIVING IBRs from the DNs reach the NN first.
>4. An updatePipeline() RPC is sent to the namenode to update the pipeline.
>5. Now, if complete() is called on the file even before the pipeline is 
> closed, the block will become COMPLETE even before it is actually 
> FINALIZED on the DN side, and the file will be closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104214#comment-16104214
 ] 

Hadoop QA commented on HDFS-11187:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-11187 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840906/HDFS-11187.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20455/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize disk access for last partial chunk checksum of Finalized replica
> -
>
> Key: HDFS-11187
> URL: https://issues.apache.org/jira/browse/HDFS-11187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11187.001.patch
>
>
> The patch at HDFS-11160 ensures BlockSender reads the correct version of 
> the metafile when there are concurrent writers.
> However, the implementation is not optimal, because it must always read the 
> last partial chunk checksum from disk while holding the FsDatasetImpl lock 
> for every reader. It is possible to optimize this by keeping an up-to-date 
> copy of the last partial chunk checksum in memory and reducing disk access.
> I am separating the optimization into a new jira, because maintaining the 
> in-memory checksum state requires a lot more work.
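
A minimal sketch of the in-memory caching idea, assuming a hypothetical 
replica class; the real change must also keep this cache coherent with 
concurrent writers, which is the extra work mentioned above:

{code}
import java.io.IOException;

class FinalizedReplicaSketch {
  private final Object checksumLock = new Object();
  // Cached copy of the last partial chunk checksum; null until first read
  // or after invalidation.
  private byte[] lastPartialChunkChecksum;

  byte[] getLastPartialChunkChecksum() throws IOException {
    synchronized (checksumLock) {
      if (lastPartialChunkChecksum == null) {
        // Pay the disk read once instead of on every BlockSender open.
        lastPartialChunkChecksum = readLastChecksumFromMetaFile();
      }
      return lastPartialChunkChecksum.clone();
    }
  }

  // Called whenever the replica's meta file changes, so the next reader
  // re-loads the checksum from disk.
  void invalidateCachedChecksum() {
    synchronized (checksumLock) {
      lastPartialChunkChecksum = null;
    }
  }

  private byte[] readLastChecksumFromMetaFile() throws IOException {
    return new byte[0]; // placeholder for the actual meta-file read
  }
}
{code}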



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104213#comment-16104213
 ] 

Hadoop QA commented on HDFS-10477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-10477 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10477 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817992/HDFS-10477.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20454/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 
> DataNodes, it locked the Namesystem for about 7 minutes, as the log below 
> shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 

[jira] [Commented] (HDFS-10967) Add configuration for BlockPlacementPolicy to avoid near-full DataNodes

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104209#comment-16104209
 ] 

Hadoop QA commented on HDFS-10967:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-10967 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832548/HDFS-10967.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20450/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add configuration for BlockPlacementPolicy to avoid near-full DataNodes
> ---
>
> Key: HDFS-10967
> URL: https://issues.apache.org/jira/browse/HDFS-10967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: balancer
> Attachments: HDFS-10967.00.patch, HDFS-10967.01.patch, 
> HDFS-10967.02.patch, HDFS-10967.03.patch
>
>
> Large production clusters are likely to have heterogeneous nodes in terms of 
> storage capacity, memory, and CPU cores. It is not always possible to 
> proportionally ingest data into DataNodes based on their remaining storage 
> capacity. Therefore it's possible for a subset of DataNodes to be much closer 
> to full capacity than the rest.
> This heterogeneity is most likely rack-by-rack -- i.e. _m_ whole racks of 
> low-storage nodes and _n_ whole racks of high-storage nodes. So it'd be very 
> useful if we could lower the chance for those near-full DataNodes to become 
> destinations for the 2nd and 3rd replicas.
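
A sketch of the check such a configuration could drive; the threshold, its 
name, and the placement-policy hook are hypothetical, not the attached 
patches:

{code}
class NearFullNodeFilterSketch {
  // Hypothetical config, e.g. 0.90 == treat nodes more than 90% full as
  // poor targets for 2nd/3rd replicas when emptier candidates exist.
  private final double nearFullThreshold;

  NearFullNodeFilterSketch(double nearFullThreshold) {
    this.nearFullThreshold = nearFullThreshold;
  }

  boolean isGoodTarget(long capacityBytes, long dfsUsedBytes) {
    double used = capacityBytes == 0
        ? 1.0 : (double) dfsUsedBytes / capacityBytes;
    return used < nearFullThreshold;
  }
}
{code}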



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104210#comment-16104210
 ] 

Hadoop QA commented on HDFS-10348:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-10348 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801965/HDFS-10348-1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20452/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348-1.patch, HDFS-10348.patch
>
>
> The Namenode (via the report bad block method) doesn't check whether the 
> block belongs to the datanode before it adds it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. The namenode added node N3 and block blk1 to the corrupt replicas map 
> and asked one of the good nodes (one of the other 2 nodes) to replicate 
> the block to another node, N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. The namenode removed the block and node N3 from the corrupt replicas map.
>It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3. C2 also encountered the checksum exception and reported the bad 
> block to the namenode.
> 7. The namenode added the corrupt block blk1 and node N3 to the corrupt 
> replicas map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report, since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present, only in 
> the corruptReplicasMap.
> This is because, when removing a node, we only go through the blocks that 
> are present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Attachment: HDFS-11920-HDFS-7240.008.patch

Some of the failed tests were related; they are fixed in the v008 patch. To 
any reviewer: the v008 patch is almost identical to v007, except for the 
following changes:
1. Changed the internal stream of OzoneInputStream from ChunkInputStream to 
ChunkGroupInputStream.
2. Added an entry to ozone-default.xml.
3. KeyHandler's putKey now specifies the size of the key based on the data.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads to this single SCM block. This works fine for keys with 
> reasonably small data sizes. However, if the data is huge (e.g. it does not 
> even fit into a single container), then we need to be able to partition the 
> key data into multiple blocks, each in its own container. This JIRA changes 
> the key-related classes to support this.
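
Partitioning a key across multiple blocks implies the read path must chain 
the per-block streams back into one logical stream; a generic sketch of that 
pattern (not the actual ChunkGroupInputStream):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

// Reads one logical key sequentially across multiple per-block streams.
class GroupInputStreamSketch extends InputStream {
  private final List<InputStream> parts;
  private int current = 0;

  GroupInputStreamSketch(List<InputStream> parts) {
    this.parts = parts;
  }

  @Override
  public int read() throws IOException {
    while (current < parts.size()) {
      int b = parts.get(current).read();
      if (b != -1) {
        return b;
      }
      parts.get(current).close(); // this block is exhausted, move on
      current++;
    }
    return -1; // all parts consumed
  }
}
{code}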



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10687) Federation Membership State Store internal API

2017-07-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104173#comment-16104173
 ] 

Chris Douglas commented on HDFS-10687:
--

I only skimmed the PB translation classes, assuming those are correct. Only a 
few minor questions:
* {{MembershipNamenodeResolver#getMembershipStore}} can't return a 
partially-constructed object accessed by multiple threads, because the service 
init guarantees that it only receives the instance created by the state store? 
Through {{getRegisteredRecordStores}}, this holds no locks. {{addRecordStore}} 
could include a check that the service is still in the init state and/or the 
recordStores map could be unmodifiable at the end of init.
* In {{MembershipStoreImpl}}, the {{activeRegistrations}} and 
{{expiredRegistrations}} fields are protected by a r/w lock, except in the 
heartbeat handling when it's used for logging. Do either of these need to be a 
{{ConcurrentHashMap}}?
* {{EphemeralBaseRecord}} is kind of a confusing name for that type, but I 
don't have a better suggestion...

The patch could update {{findbugsExcludeFile.xml}} to prevent it from flagging 
generated code (under dev-support).
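
On the suggestion above of freezing the record-store registry once init 
completes, a minimal generic sketch (class and method names are hypothetical, 
not the patch's API):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class StateStoreInitSketch {
  // volatile so readers see the frozen map without taking a lock.
  private volatile Map<String, Object> recordStores = new HashMap<>();
  private boolean initialized = false;

  synchronized void addRecordStore(String name, Object store) {
    if (initialized) {
      throw new IllegalStateException("record stores are frozen after init");
    }
    recordStores.put(name, store);
  }

  synchronized void finishInit() {
    // After this point the map can be handed out safely without locking.
    recordStores = Collections.unmodifiableMap(recordStores);
    initialized = true;
  }

  Map<String, Object> getRegisteredRecordStores() {
    return recordStores;
  }
}
{code}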

> Federation Membership State Store internal API
> --
>
> Key: HDFS-10687
> URL: https://issues.apache.org/jira/browse/HDFS-10687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10467-HDFS-10687-001.patch, 
> HDFS-10687-HDFS-10467-002.patch, HDFS-10687-HDFS-10467-003.patch, 
> HDFS-10687-HDFS-10467-004.patch, HDFS-10687-HDFS-10467-005.patch, 
> HDFS-10687-HDFS-10467-006.patch
>
>
> The Federation Membership State encapsulates the information about the 
> Namenodes of each sub-cluster that are participating in Federation. The 
> information includes RPC and Web addresses. This information is stored in 
> the State Store and later used by the Router to find data in the federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12190:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~jojochuang] again for the review!


> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch, HDFS-12190.005.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104153#comment-16104153
 ] 

Hadoop QA commented on HDFS-12044:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
186 unchanged - 0 fixed = 187 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12044 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879251/HDFS-12044.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 56ce65acd785 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e3c7300 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104119#comment-16104119
 ] 

Anu Engineer commented on HDFS-12034:
-

I have started another pre-commit build

https://builds.apache.org/blue/organizations/jenkins/PreCommit-HDFS-Build/detail/PreCommit-HDFS-Build/20448/pipeline

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the 
> Ozone KSM server. I created a separate issue so the required data/MXBeans 
> can be collected separately and the two web interfaces handled 
> independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104109#comment-16104109
 ] 

Anu Engineer commented on HDFS-12034:
-

I have just merged trunk into the HDFS-7240 branch; hopefully that addresses 
the issue. [~aw], thanks for your help, I really appreciate it. I will kick 
off another build to see if the merge helps.


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the 
> Ozone KSM server. I created a separate issue so the required data/MXBeans 
> can be collected separately and the two web interfaces handled 
> independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104103#comment-16104103
 ] 

Yongjun Zhang commented on HDFS-12190:
--

Thanks [~jojochuang] for the review, the failed tests are not related. Will 
commit shortly.


> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch, HDFS-12190.005.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-27 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-12151:
-
Attachment: HDFS-12151.004.patch

I apologize for the noise, everyone, but since I can't reproduce the test 
failure locally, the next couple of patches are experimental and at least 
this one will still fail. There's another layer of swallowed exceptions 
before it gets as far as the write, and I need to see where it's being 
thrown...

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, 
> HDFS-12151.003.patch, HDFS-12151.004.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because 
> there aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}
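
To illustrate the failure class (a hypothetical guard, not the fix in the 
attached patches): the DN indexes into targetStorageIds, which an older 
client leaves empty, so any access needs a length check along these lines:

{code}
class WriteBlockCompatSketch {
  // Hypothetical guard, not the committed fix: a Hadoop 2 client sends an
  // empty targetStorageIds array, so blind indexing throws the
  // ArrayIndexOutOfBoundsException shown above.
  static String firstStorageId(String[] targetStorageIds) {
    // Treat a missing storage id as "unspecified" so pre-3.0 writers
    // remain compatible.
    return targetStorageIds.length > 0 ? targetStorageIds[0] : null;
  }
}
{code}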



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12072:
---
Labels: hdfs-ec-3.0-nice-to-have  (was: )

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
>
> In {{DatanodeManager#handleHeartbeat}}, it first takes up to 
> {{maxTransfer}} non-EC reconstruction tasks; if that does not fill the 
> request, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
>     maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>       pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
>     .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>       DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster with a constant stream of non-EC reconstruction 
> tasks, the EC reconstruction tasks never get a chance to run.
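
One possible shape of a fairness fix, sketched as a simple proportional split 
of the heartbeat's transfer budget; this only illustrates the problem 
statement and is not the design chosen in this JIRA:

{code}
class TransferBudgetSketch {
  // Hypothetical split: reserve a fraction of maxTransfers for EC
  // reconstruction so replication traffic can never starve it entirely.
  static int[] splitBudget(int maxTransfers, double ecShare) {
    int ecBudget = (int) Math.ceil(maxTransfers * ecShare);
    int replicationBudget = maxTransfers - ecBudget;
    return new int[] { replicationBudget, ecBudget };
  }
}
{code}

For example, splitBudget(20, 0.25) would reserve 5 of the 20 per-heartbeat 
slots for EC tasks regardless of the replication backlog.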



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104064#comment-16104064
 ] 

Hadoop QA commented on HDFS-12190:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
15s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 10 new + 198 unchanged 
- 3 fixed = 208 total (was 201) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12190 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879238/HDFS-12190.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d3ce0a567b9f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f4808c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20443/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 

[jira] [Commented] (HDFS-12060) Ozone: OzoneClient: Add list calls

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104055#comment-16104055
 ] 

Hadoop QA commented on HDFS-12060:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 0 unchanged - 0 fixed = 11 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879237/HDFS-12060-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ab5f642260ff 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5e47076 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104051#comment-16104051
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 2 
unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.TestOzoneClientImpl |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879242/HDFS-11920-HDFS-7240.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5570e04cb73e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104007#comment-16104007
 ] 

Hadoop QA commented on HDFS-12151:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.TestDataXceiverBackwardsCompat |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879233/HDFS-12151.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f1b7915b4f6b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4a85c6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20440/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20440/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20440/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20440/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 

[jira] [Updated] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12044:
-
Attachment: HDFS-12044.05.patch

Fix {{TestFileChecksum}}. The rest of the tests pass on my machine.

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> ---
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch, 
> HDFS-12044.05.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to 
> {{corePoolSize=2}} and {{maxPoolSize=8}}, and it rejects further tasks once 
> its queue is full.
> Consider what happens when {{BlockManager#maxReplicationStreams}} is larger 
> than the pool's {{corePoolSize}}/{{maxPoolSize}}, for example 
> {{maxReplicationStreams=20}} with {{corePoolSize=2, maxPoolSize=8}}. The NN 
> sends up to {{maxTransfer}} reconstruction tasks to a DN on each heartbeat, 
> calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - 
> xmitsInProgress;
> {code}
> However, at any given time the 
> {{ErasureCodingWorker#stripedReconstructionPool}} accounts for only 2 
> {{xmitsInProgress}}. So on each 3s heartbeat the NN sends about {{20-2 = 18}} 
> reconstruction tasks to the DN, and the DN throws most of them away if 8 
> tasks are already queued. The NN then takes longer to re-discover that these 
> blocks are under-replicated and to schedule new tasks, making recovery slow 
> and bursty.
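
For illustration, a minimal, self-contained sketch of the rejection behavior 
described above. It uses a plain {{ThreadPoolExecutor}} shaped like the 
default pool; the queue type and capacity are assumptions, not the actual 
DataNode code:
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMismatchDemo {
  public static void main(String[] args) {
    // Pool shaped like the default striped reconstruction pool: 2 core
    // threads, 8 max threads; the bounded queue size is an assumption.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(8));
    int accepted = 0;
    int rejected = 0;
    // NN side: maxTransfer = maxReplicationStreams - xmitsInProgress = 20 - 2.
    for (int i = 0; i < 18; i++) {
      try {
        pool.execute(() -> {
          try {
            Thread.sleep(10_000); // long-running reconstruction stand-in
          } catch (InterruptedException ignored) {
          }
        });
        accepted++;
      } catch (RejectedExecutionException e) {
        rejected++; // the DN-side "throw away" the description refers to
      }
    }
    System.out.println("accepted=" + accepted + ", rejected=" + rejected);
    pool.shutdownNow();
  }
}
{code}
With these sizes the pool accepts 16 tasks (2 core threads + 8 queued + 6 
extra threads up to the max) and rejects the rest, matching the slow, bursty 
pattern described in the summary.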



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103974#comment-16103974
 ] 

Allen Wittenauer edited comment on HDFS-12034 at 7/27/17 9:54 PM:
--

bq. 904m 56s

That's... not great.

... and it looks like it was mostly stuck in the rat processing:

https://builds.apache.org/job/PreCommit-HDFS-Build/20429/artifact/patchprocess/patch-asflicense-root.txt

(Log cuts off because yetus was killed)

So yeah, upgrading to the new rat is really the first step.


was (Author: aw):
bq. 904m 56s

That's... not great.

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/MXBeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103974#comment-16103974
 ] 

Allen Wittenauer commented on HDFS-12034:
-

bq. 904m 56s

That's... not great.

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/MXBeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-07-27 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103972#comment-16103972
 ] 

Erik Krogen commented on HDFS-12131:


Hey [~andrew.wang], thanks for looking. Here I followed the example of 
{{StaleDataNodes}} since that was the most closely related metric, but I agree 
with you that having the metric name match the MBean name makes more sense. 
There are a number of metrics in FSNamesystem that don't match the MBean 
though, even besides the Num prefix: LockQueueLength vs. FsLockQueueLength, 
all of the {{ReplicatedBlocksMBean}} ones, and all of the 
{{ECBlockGroupsStatsMBean}} ones.

I don't really like that {{StaleDataNodes}} will be an outlier compared to the 
other XxxDataNodes metrics if I add the Num prefix in this patch, but 
otherwise I'm in support of it. If you'd prefer, I'm fine with putting the Num 
prefix back in.

> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch
>
>
> A number of useful values are emitted via the FSNamesystem JMX, but not 
> through the metrics system. It would be useful to track these over time, 
> e.g. to alert on them via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.
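
For illustration, a hedged sketch of the annotation pattern the description 
refers to; the class below is a hypothetical stand-in, not the actual 
FSNamesystem code:
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;

// Hypothetical stand-in for an FSNamesystem getter; only the annotation
// pattern matters here.
@Metrics(context = "dfs")
public class NameSystemMetricsSketch {
  @Metric({"NumLiveDataNodes", "Number of datanodes that are currently live"})
  public int getNumLiveDataNodes() {
    return 0; // the real getter delegates to the DatanodeManager
  }
}
{code}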



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103968#comment-16103968
 ] 

Hadoop QA commented on HDFS-12034:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
47s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 31s{color} 
| {color:red} root generated 553 new + 783 unchanged - 0 fixed = 1336 total 
(was 783) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  9s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}904m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}1053m 49s{color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Unread field:ServiceRuntimeInfo.java:[line 51] |
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
|   | hadoop.net.TestDNS |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.ksm.TestKSMMetrcis |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12034 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878792/HDFS-12034-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  

[jira] [Commented] (HDFS-12195) Ozone: DeleteKey-1: KSM replies delete key request asynchronously

2017-07-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103962#comment-16103962
 ] 

Chen Liang commented on HDFS-12195:
---

It appears to me that the idea of having {{KeySpaceManager#listKeys(keyPrefix, 
maxKeys)}} is that we need a way to list all keys that start with a special 
prefix, i.e. #deleting#, across ALL volumes and buckets, while the current 
listKeys lists keys within a particular volume+bucket prefix and requires a 
volume and bucket to be specified. It also appears to me that the only reason 
this is in KeySpaceManager is to help unit testing. An alternative way to unit 
test would be to call the metaStore's new {{listKeys(String keyPrefix, int 
maxKeys)}} directly; then we don't need this call in KeySpaceManager. A 
minimal sketch of such a prefix scan follows.
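
This is a sketch only, using a sorted {{TreeMap}} as a stand-in for the KSM 
metadata store (the real store and its key layout may differ):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PrefixScanSketch {
  static List<String> listKeys(TreeMap<String, byte[]> store,
                               String keyPrefix, int maxKeys) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, byte[]> e : store.tailMap(keyPrefix).entrySet()) {
      if (!e.getKey().startsWith(keyPrefix) || result.size() >= maxKeys) {
        break; // sorted order: once past the prefix, we can stop
      }
      result.add(e.getKey());
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, byte[]> store = new TreeMap<>();
    store.put("#deleting#/vol1/bucket1/key1", new byte[0]);
    store.put("#deleting#/vol2/bucketA/key9", new byte[0]);
    store.put("/vol1/bucket1/key2", new byte[0]);
    System.out.println(listKeys(store, "#deleting#", 10));
  }
}
{code}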

> Ozone: DeleteKey-1: KSM replies delete key request asynchronously
> -
>
> Key: HDFS-12195
> URL: https://issues.apache.org/jira/browse/HDFS-12195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: client-ksm.png, HDFS-12195-HDFS-7240.001.patch, 
> HDFS-12195-HDFS-7240.002.patch
>
>
> We will implement delete key in Ozone in multiple child tasks; this is one 
> of the child tasks, implementing the client-to-SCM communication. We need to 
> do it in an async manner: once the key state is changed in KSM metadata, KSM 
> is ready to reply to the client with a success message. The actual deletes 
> on the other layers will happen some time later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11576:
---
Labels:   (was: release-blocker)

Removing from release blockers. Wish we could fix it, but it should be fine to 
live with the 3 sec deadline for block recovery, as we currently do. The 
deadline means that replication is restarted after 3 sec and will eventually 
complete.

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. After succeeding with the first recovery, DN calls 
> commitBlockSynchronization on the NN, which fails because X < X+1
> ... 
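
For illustration, a minimal sketch of the ID check behind this failure loop; 
the class and method bodies are simplified stand-ins, not the actual NameNode 
code:
{code}
public class RecoveryIdRace {
  private long latestRecoveryId;

  long scheduleRecovery() {
    return ++latestRecoveryId; // NN issues monotonically increasing IDs
  }

  void commitBlockSynchronization(long recoveryId) {
    if (recoveryId < latestRecoveryId) {
      throw new IllegalStateException(
          "recovery ID " + recoveryId + " < latest " + latestRecoveryId);
    }
  }

  public static void main(String[] args) {
    RecoveryIdRace nn = new RecoveryIdRace();
    long x = nn.scheduleRecovery(); // heartbeat 1: recoveryID=X
    nn.scheduleRecovery();          // heartbeat 2: recoveryID=X+1, issued
                                    // before the slow recovery X finishes
    try {
      nn.commitBlockSynchronization(x);
    } catch (IllegalStateException e) {
      // The first recovery's commit is rejected; the cycle then repeats.
      System.out.println("commit rejected: " + e.getMessage());
    }
  }
}
{code}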



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2319) Add test cases for FSshell -stat

2017-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103902#comment-16103902
 ] 

Hudson commented on HDFS-2319:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12061 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12061/])
HDFS-2319. Add test cases for FSshell -stat. Contributed by XieXianshan 
(jitendra: rev e3c73002250a21a771689081b51764eca1d862a7)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml


> Add test cases for FSshell -stat
> 
>
> Key: HDFS-2319
> URL: https://issues.apache.org/jira/browse/HDFS-2319
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: XieXianshan
>Assignee: Bharat Viswanadham
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-2319.02.patch, HDFS-2319.patch
>
>
> Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Attachment: HDFS-11920-HDFS-7240.007.patch

Post v007 patch to fix checkstyle.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads to this single SCM block. This works fine for keys with a 
> reasonably small data size. However, if the data is too large (e.g. it does 
> not even fit into a single container), then we need to be able to partition 
> the key data into multiple blocks, each in its own container. This JIRA 
> changes the key-related classes to support this.
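
For illustration, a minimal sketch of the partitioning idea; the offset/length 
pairs are simplified stand-ins for the real key/block metadata classes:
{code}
import java.util.ArrayList;
import java.util.List;

public class KeyPartitionSketch {
  // Split a key of the given length into block-sized {offset, length} ranges.
  static List<long[]> partition(long keyLength, long blockSize) {
    List<long[]> blocks = new ArrayList<>();
    for (long off = 0; off < keyLength; off += blockSize) {
      blocks.add(new long[] { off, Math.min(blockSize, keyLength - off) });
    }
    return blocks;
  }

  public static void main(String[] args) {
    // A 600 MB key with 256 MB blocks becomes three blocks: 256 + 256 + 88 MB.
    for (long[] b : partition(600L << 20, 256L << 20)) {
      System.out.println("offset=" + b[0] + " length=" + b[1]);
    }
  }
}
{code}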



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103877#comment-16103877
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
2 unchanged - 1 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestOzoneClientImpl |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879221/HDFS-11920-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cf380b0f74a5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5e47076 |
| Default 

[jira] [Commented] (HDFS-12178) Ozone: OzoneClient: Handling SCM container creationFlag at client side

2017-07-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103876#comment-16103876
 ] 

Chen Liang commented on HDFS-12178:
---

Thanks [~nandakumar131] for the elaboration and for updating the patch. 
Checking {{CONTAINER_EXISTS}} LGTM. Can we move 
{{containersCreated.add(containerName);}} to after the try-catch clause so 
that we don't need to call it in two places? Also, for {{containersCreated}}, 
it seems the order of the container names doesn't matter, only their presence, 
so would it be better to change the {{ArrayList}} to a {{HashSet}} for O(1) 
{{contains(...)}}? A sketch of both suggestions follows.
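
This sketch is a hypothetical stand-in for the client code under review; 
{{ensureCreated}}, {{createContainer}} and the exception type are made up for 
illustration:
{code}
import java.util.HashSet;
import java.util.Set;

public class ContainerCreateSketch {
  // HashSet instead of ArrayList: only presence matters, and contains() is
  // O(1).
  private final Set<String> containersCreated = new HashSet<>();

  void ensureCreated(String containerName) {
    if (containersCreated.contains(containerName)) {
      return;
    }
    try {
      createContainer(containerName);
    } catch (ContainerExistsException e) {
      // Already created elsewhere (CONTAINER_EXISTS); treat as success.
    }
    // Single add() after the try-catch instead of one call in each branch.
    containersCreated.add(containerName);
  }

  private void createContainer(String name) throws ContainerExistsException {
    // stand-in for the real SCM container creation call
  }

  static class ContainerExistsException extends Exception {
  }
}
{code}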

> Ozone: OzoneClient: Handling SCM container creationFlag at client side
> --
>
> Key: HDFS-12178
> URL: https://issues.apache.org/jira/browse/HDFS-12178
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12178-HDFS-7240.000.patch, 
> HDFS-12178-HDFS-7240.001.patch, HDFS-12178-HDFS-7240.002.patch
>
>
> SCM BlockManager provisions a pool of containers upon a block creation 
> request, but only one container is returned to the client with the 
> creationFlag. The other containers provisioned in the same batch will not 
> have this flag. This jira is to handle that scenario on the client side, 
> until HDFS-11888 is fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103870#comment-16103870
 ] 

Hadoop QA commented on HDFS-10429:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10429 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832706/HDFS-10429.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a638c7c51bba 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f4808c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20442/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20442/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20442/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>

[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-07-27 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103867#comment-16103867
 ] 

Sean Mackrory commented on HDFS-11096:
--

Updating: it was a recent change that broke this. I've posted a patch to fix 
it that's being reviewed / iterated on, and I've updated my rolling upgrade 
test scripts to actually confirm via the Job History Server that the jobs 
themselves FINISHED and were SUCCESSFUL.

I re-ran the test with an early patch and was able to get a successful 
rolling upgrade with 5-10 minute delays between each step. So the entire 
rolling upgrade of a 9-node (6 worker-node) cluster was spread out over 4 
hours, and I didn't encounter any other issues, EXCEPT: in my test workload I 
had to increase Terasort's output replication, because some job failures 
occasionally happened when a job wrote to a node that was about to be taken 
down for upgrades. I fixed that, and no other actual compatibility issues in 
Hadoop were found. I'll push the fixes out to GitHub soon...

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> do a rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2319) Add test cases for FSshell -stat

2017-07-27 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-2319:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

I have committed this to tunk. Thanks to [~xiexianshan] and [~bharatviswa] for 
the patches.

> Add test cases for FSshell -stat
> 
>
> Key: HDFS-2319
> URL: https://issues.apache.org/jira/browse/HDFS-2319
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: XieXianshan
>Assignee: Bharat Viswanadham
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-2319.02.patch, HDFS-2319.patch
>
>
> Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103858#comment-16103858
 ] 

Wei-Chiu Chuang commented on HDFS-12190:


LGTM +1 pending Jenkins. Thanks!

> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch, HDFS-12190.005.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 
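
For reference, a minimal sketch (path and setup assumed) showing that the 
access time already sits next to the modification time on {{FileStatus}}, so 
the shell mostly needs a format option to print it:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AccessTimeSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/tmp/example")); // placeholder
    System.out.println("mtime=" + st.getModificationTime()
        + " atime=" + st.getAccessTime());
  }
}
{code}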



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2319) Add test cases for FSshell -stat

2017-07-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103860#comment-16103860
 ] 

Jitendra Nath Pandey edited comment on HDFS-2319 at 7/27/17 8:34 PM:
-

I have committed this to trunk. Thanks to [~xiexianshan] and [~bharatviswa] for 
the patches.


was (Author: jnp):
I have committed this to tunk. Thanks to [~xiexianshan] and [~bharatviswa] for 
the patches.

> Add test cases for FSshell -stat
> 
>
> Key: HDFS-2319
> URL: https://issues.apache.org/jira/browse/HDFS-2319
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: XieXianshan
>Assignee: Bharat Viswanadham
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-2319.02.patch, HDFS-2319.patch
>
>
> Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103857#comment-16103857
 ] 

Lei (Eddy) Xu commented on HDFS-12044:
--

The test failures seem to be relevant. Looking into it.

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> ---
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to 
> {{corePoolSize=2}} and {{maxPoolSize=8}}, and it rejects further tasks once 
> its queue is full.
> Consider what happens when {{BlockManager#maxReplicationStreams}} is larger 
> than the pool's {{corePoolSize}}/{{maxPoolSize}}, for example 
> {{maxReplicationStreams=20}} with {{corePoolSize=2, maxPoolSize=8}}. The NN 
> sends up to {{maxTransfer}} reconstruction tasks to a DN on each heartbeat, 
> calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - 
> xmitsInProgress;
> {code}
> However, at any given time the 
> {{ErasureCodingWorker#stripedReconstructionPool}} accounts for only 2 
> {{xmitsInProgress}}. So on each 3s heartbeat the NN sends about {{20-2 = 18}} 
> reconstruction tasks to the DN, and the DN throws most of them away if 8 
> tasks are already queued. The NN then takes longer to re-discover that these 
> blocks are under-replicated and to schedule new tasks, making recovery slow 
> and bursty.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12190:
-
Attachment: HDFS-12190.005.patch

Thanks [~jojochuang], good suggestion again. Updated the patch.


> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch, HDFS-12190.005.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12060) Ozone: OzoneClient: Add list calls

2017-07-27 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12060:
--
Status: Patch Available  (was: Open)

> Ozone: OzoneClient: Add list calls
> --
>
> Key: HDFS-12060
> URL: https://issues.apache.org/jira/browse/HDFS-12060
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12060-HDFS-7240.000.patch
>
>
> Support for {{listVolumes}}, {{listBuckets}}, {{listKeys}} in {{OzoneClient}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12060) Ozone: OzoneClient: Add list calls

2017-07-27 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103832#comment-16103832
 ] 

Nandakumar commented on HDFS-12060:
---

Initial version of the patch uploaded.
All the list calls return a generic {{ResourceIterator}}, with which we can 
iterate over the volumes/buckets/keys; a sketch of the idea follows.
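
The sketch below is hedged: the {{Pager}} interface and page size are 
assumptions for illustration, not the patch's actual API:
{code}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class ResourceIteratorSketch implements Iterator<String> {
  interface Pager {
    List<String> nextPage(String startAfter, int max);
  }

  private static final int PAGE_SIZE = 100;
  private final Pager pager;
  private Iterator<String> page;
  private String last;

  ResourceIteratorSketch(Pager pager) {
    this.pager = pager;
    this.page = pager.nextPage(null, PAGE_SIZE).iterator();
  }

  @Override
  public boolean hasNext() {
    if (!page.hasNext() && last != null) {
      // Current page exhausted: fetch the next one, resuming after the last
      // element we handed out.
      page = pager.nextPage(last, PAGE_SIZE).iterator();
      last = null;
    }
    return page.hasNext();
  }

  @Override
  public String next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    last = page.next();
    return last;
  }

  public static void main(String[] args) {
    final List<String> volumes = Arrays.asList("vol1", "vol2", "vol3");
    Pager pager = (startAfter, max) -> {
      int from = startAfter == null ? 0 : volumes.indexOf(startAfter) + 1;
      return volumes.subList(from, Math.min(from + max, volumes.size()));
    };
    new ResourceIteratorSketch(pager).forEachRemaining(System.out::println);
  }
}
{code}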

> Ozone: OzoneClient: Add list calls
> --
>
> Key: HDFS-12060
> URL: https://issues.apache.org/jira/browse/HDFS-12060
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12060-HDFS-7240.000.patch
>
>
> Support for {{listVolumes}}, {{listBuckets}}, {{listKeys}} in {{OzoneClient}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12060) Ozone: OzoneClient: Add list calls

2017-07-27 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12060:
--
Attachment: HDFS-12060-HDFS-7240.000.patch

> Ozone: OzoneClient: Add list calls
> --
>
> Key: HDFS-12060
> URL: https://issues.apache.org/jira/browse/HDFS-12060
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12060-HDFS-7240.000.patch
>
>
> Support for {{listVolumes}}, {{listBuckets}}, {{listKeys}} in {{OzoneClient}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103820#comment-16103820
 ] 

Hudson commented on HDFS-11896:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12059 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12059/])
HDFS-11896. Non-dfsUsed will be doubled on dead node re-registration. (shv: rev 
c4a85c694fae3f814ab4e7f3c172da1df0e0e353)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java


> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.9.0, 2.7.4, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-008.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896-branch-2.7-008.patch, 
> HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103818#comment-16103818
 ] 

Hadoop QA commented on HDFS-11896:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
55s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 264 unchanged - 0 fixed = 265 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 64 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_131 Failed junit tests | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:67e87c9 |
| JIRA Issue | HDFS-11896 |
| JIRA Patch URL | 

[jira] [Resolved] (HDFS-10799) NameNode should use loginUser(hdfs) to serve iNotify requests

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-10799.

Resolution: Won't Fix

Closing this jira because the proposed solution does not seem appropriate. As I 
explained earlier, the correct fix for this problem should be on the client 
side, which is supposed to renew its Kerberos credentials before they expire.

> NameNode should use loginUser(hdfs) to serve iNotify requests
> -
>
> Key: HDFS-10799
> URL: https://issues.apache.org/jira/browse/HDFS-10799
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: Kerberized, HA cluster, iNotify client, CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10799.001.patch
>
>
> When a NameNode serves iNotify requests from a client, it verifies the client 
> has superuser permission and then uses the client's Kerberos principal to 
> read edits from journal nodes.
> However, if the client does not renew its tgt tickets, the connection from 
> the NameNode to the journal nodes may fail, in which case the NameNode thinks 
> the edits are corrupt and prints a scary error message:
> "During automatic edit log failover, we noticed that all of the remaining 
> edit log streams are shorter than the current one!  The best remaining edit 
> log ends at transaction 11577603, but we thought we could read up to 
> transaction 11577606.  If you continue, metadata will be lost forever!"
> However, the edits are actually good. NameNode _should not freak out when an 
> iNotify client's tgt ticket expires_.
> I think an easy solution to this bug is: after the NameNode verifies the 
> client has superuser permission, call {{SecurityUtil.doAsLoginUser}} and then 
> read the edits. This will make sure the operation does not fail due to an 
> expired client ticket.
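> A minimal sketch of that idea (illustrative only; {{readEditsSinceTxid}} is a 
> stand-in for the NN's actual edit-reading path, not an existing method):
> {code}
> import java.io.IOException;
> import java.security.PrivilegedExceptionAction;
> import org.apache.hadoop.hdfs.inotify.EventBatchList;
> import org.apache.hadoop.security.SecurityUtil;
>
> EventBatchList getEditsAsLoginUser(final long txid) throws IOException {
>   // Run the edit read as the NN's login user (hdfs) so an expired client
>   // TGT cannot break the NN-to-JournalNode connection.
>   return SecurityUtil.doAsLoginUser(
>       new PrivilegedExceptionAction<EventBatchList>() {
>         @Override
>         public EventBatchList run() throws IOException {
>           return readEditsSinceTxid(txid);  // hypothetical helper
>         }
>       });
> }
> {code}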
> Excerpt of related logs:
> {noformat}
> 2016-08-18 19:05:13,979 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:h...@example.com (auth:KERBEROS) 
> cause:java.io.IOException: We encountered an error reading 
> http://jn1.example.com:8480/getJournal?jid=nameservice1=11577487=yyy,
>  
> http://jn1.example.com:8480/getJournal?jid=nameservice1=11577487=yyy.
>   During automatic edit log failover, we noticed that all of the remaining 
> edit log streams are shorter than the current one!  The best remaining edit 
> log ends at transaction 11577603, but we thought we could read up to 
> transaction 11577606.  If you continue, metadata will be lost forever!
> 2016-08-18 19:05:13,979 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 112 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.getEditsFromTxid from [client 
> IP:port] Call#73 Retry#0
> java.io.IOException: We encountered an error reading 
> http://jn1.example.com:8480/getJournal?jid=nameservice1=11577487=yyy,
>  
> http://jn1.example.com:8480/getJournal?jid=nameservice1=11577487=yyy.
>   During automatic edit log failover, we noticed that all of the remaining 
> edit log streams are shorter than the current one!  The best remaining edit 
> log ends at transaction 11577603, but we thought we could read up to 
> transaction 11577606.  If you continue, metadata will be lost forever!
> at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.readOp(NameNodeRpcServer.java:1674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEditsFromTxid(NameNodeRpcServer.java:1736)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEditsFromTxid(AuthorizationProviderProxyClientProtocol.java:1010)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEditsFromTxid(ClientNamenodeProtocolServerSideTranslatorPB.java:1475)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> 

[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103812#comment-16103812
 ] 

Wei-Chiu Chuang commented on HDFS-12190:


The checkstyle warning can be ignored; it's due to existing indentation.

I think the test is good, but it shouldn't need to sleep before getting t1 and 
t2. How about setting t1 = Time.now() + 3000 and t2 = Time.now() + 6000? This 
way the test can complete faster.
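
Something like this sketch (assuming the test's existing {{fs}}, {{file1}} and 
{{file2}} from its setup):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Time;

void setAccessTimes(FileSystem fs, Path file1, Path file2) throws IOException {
  // Pick timestamps in the near future and set them explicitly, rather
  // than sleeping between writes to let the wall clock advance.
  long t1 = Time.now() + 3000;
  long t2 = Time.now() + 6000;
  fs.setTimes(file1, t1, t1);   // setTimes(path, modificationTime, accessTime)
  fs.setTimes(file2, t2, t2);
  // The stat output can then be asserted against t1/t2 immediately.
}
{code}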

Thanks.

> Enable 'hdfs dfs -stat' to display access time
> --
>
> Key: HDFS-12190
> URL: https://issues.apache.org/jira/browse/HDFS-12190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, shell
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12190.001.patch, HDFS-12190.002.patch, 
> HDFS-12190.003.patch, HDFS-12190.004.patch
>
>
> "hdfs dfs -stat" currently only can show modification time of a file but not 
> access time. Sometimes it's useful to show access time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-27 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-12151:
-
Attachment: HDFS-12151.003.patch

Attaching a patch with the checkstyle issues fixed and also logging a stack 
trace for exceptions that happen earlier than required. I tried running 
parallel tests locally and didn't have a problem, but many other tests are 
failing because they think LOG fields are missing (but in the code, they're not 
- working on it). Also had a clean Yetus run locally, so I may be missing some 
config or something.

I don't want to handle RuntimeExceptions differently because it's an NPE that 
we receive in the case of the bug I'm fixing, and it's an NPE that we receive 
after data has been sent to the server because I haven't mocked everything. So 
if we receive an NPE before data is sent to the server, I'd like to treat it 
the same as any other exception and fail if it's too early.

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, 
> HDFS-12151.003.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because 
> there aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12206) Rename the split EC / replicated block metrics

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103791#comment-16103791
 ] 

Hadoop QA commented on HDFS-12206:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12206 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879208/HDFS-12206.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e118307f54d9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27a1a5f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20437/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20437/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20437/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20437/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rename the split EC / replicated block metrics
> --
>
> Key: HDFS-12206
> URL: https://issues.apache.org/jira/browse/HDFS-12206
> Project: Hadoop HDFS
>  

[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103787#comment-16103787
 ] 

Hadoop QA commented on HDFS-12044:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
26s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
186 unchanged - 0 fixed = 188 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12044 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878902/HDFS-12044.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db232ee595db 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Resolved] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-11896.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.3
   3.0.0-beta1
   2.7.4
   2.9.0

I just committed this to trunk, and branches 2, 2.8, and 2.7.
Thank you [~brahmareddy] and [~zhz].

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.9.0, 2.7.4, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-008.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896-branch-2.7-008.patch, 
> HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11896:
---
Labels:   (was: release-blocker)

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.9.0, 2.7.4, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-008.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896-branch-2.7-008.patch, 
> HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12195) Ozone: DeleteKey-1: KSM replies delete key request asynchronously

2017-07-27 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103783#comment-16103783
 ] 

Nandakumar commented on HDFS-12195:
---

Thanks for working on this [~yuanbo].
Apart from the nits that Anu has mentioned, is there any reason for having 
{{KeySpaceManager#listKeys(keyPrefix, maxKeys)}}? I don't see any need for 
this method in KeySpaceManager.

> Ozone: DeleteKey-1: KSM replies delete key request asynchronously
> -
>
> Key: HDFS-12195
> URL: https://issues.apache.org/jira/browse/HDFS-12195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: client-ksm.png, HDFS-12195-HDFS-7240.001.patch, 
> HDFS-12195-HDFS-7240.002.patch
>
>
> We will implement delete key in ozone in multiple child tasks; this is one of 
> the child tasks, implementing the client-to-SCM communication. We need to do 
> it in an async manner: once the key state is changed in KSM metadata, KSM is 
> ready to reply to the client with a success message. Actual deletes on the 
> other layers will happen some time later.
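> An illustrative outline of that flow (the names below are assumptions, not 
> the patch's API):
> {code}
> // 1. Synchronously flip the key's state in KSM metadata.
> // 2. Return from the RPC, so the client sees success immediately.
> // 3. A background service reclaims the actual blocks later.
> void deleteKey(String volume, String bucket, String key) throws IOException {
>   metadataManager.markKeyDeleted(volume, bucket, key);  // hypothetical
>   // The reply happens when this method returns; block deletion is deferred.
> }
> {code}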



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103759#comment-16103759
 ] 

Anu Engineer commented on HDFS-12034:
-

bq. This patch needs some work on LICENSE and/or NOTICE before it can be 
committed given that there are several new licenses and copyrights in use.

[~elek] Would you be able to add the required information to the LICENSE and 
NOTICE files?
We will wait until HADOOP-14692 is committed and I will cherry-pick that to 
HDFS-7240.

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12190) Enable 'hdfs dfs -stat' to display access time

2017-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103743#comment-16103743
 ] 

Hadoop QA commented on HDFS-12190:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 10 new + 198 unchanged 
- 3 fixed = 208 total (was 201) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyFromLocal |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12190 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879192/HDFS-12190.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux b3d63dd98d55 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27a1a5f |
| Default Java | 

[jira] [Updated] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11896:
---
Attachment: HDFS-11896-branch-2.7-008.patch
HDFS-11896-008.patch

Updated both patches. Removed the now-unused variable {{nonDFS}}.

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-008.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896-branch-2.7-008.patch, 
> HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-11896:
---
Status: Open  (was: Patch Available)

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103738#comment-16103738
 ] 

Anu Engineer commented on HDFS-12034:
-

[~aw] Thank you, really appreciate your help here. 

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103734#comment-16103734
 ] 

Andrew Wang commented on HDFS-10285:


Hi folks, I gave the patch a really quick skim, only have two questions for now:

* Should dfs.storage.policy.satisfier.activate default to false for now? Might 
also rename this to "enabled" rather than "activate" to align with other 
existing config keys.
* What happens during a rolling upgrade? Will DNs ignore the unknown message, 
and NN handle this correctly? On downgrade, I assume the xattrs just stay there 
ignored.


> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policy. These 
> policies can be set on a directory/file to specify the user's preference for 
> where to store the physical blocks. When the user sets the storage policy 
> before writing data, the blocks can take advantage of the storage policy 
> preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> the blocks would already have been written with the default storage policy 
> (nothing but DISK). The user has to run the ‘Mover tool’ explicitly, 
> specifying all such file names as a list. In some distributed system 
> scenarios (e.g. HBase) it would be difficult to collect all the files and 
> run the tool, as different nodes can write files separately and the files 
> can have different paths.
> Another scenario is when the user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) to a directory 
> with another storage policy: the rename does not copy the inherited storage 
> policy from the source, so the file takes its effective policy from the 
> destination file/dir's parent. This rename operation is just a metadata 
> change in the Namenode; the physical blocks still remain with the source 
> storage policy.
> So, tracking all such business-logic-based file names from distributed nodes 
> (e.g. region servers) and running the Mover tool could be difficult for 
> admins. Here the proposal is to provide an API in the Namenode itself to 
> trigger storage policy satisfaction. A daemon thread inside the Namenode 
> should track such calls and send movement commands to the DNs. 
> Will post the detailed design thoughts document soon. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12208) NN should consider DataNode#xmitInProgress when placing new block

2017-07-27 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12208:


 Summary: NN should consider DataNode#xmitInProgress when placing 
new block
 Key: HDFS-12208
 URL: https://issues.apache.org/jira/browse/HDFS-12208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: block placement, erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


As discussed in HDFS-12044, the NN only considers xceiver counts on the DN when 
placing new blocks. The NN should also consider background reconstruction work, 
represented by xmits on the DN.
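
A sketch of the intended effect (the heuristic mirrors the existing "2x the 
average load" placement check; the method below is illustrative, not an 
existing DatanodeDescriptor API):
{code}
// Count background reconstruction work (xmits) together with active
// streaming connections (xceivers) when judging whether a DN is too busy
// to receive a new block.
static boolean isOverloaded(int xceiverCount, int xmitsInProgress,
                            double avgClusterLoad) {
  int load = xceiverCount + xmitsInProgress;
  return load > 2.0 * avgClusterLoad;
}
{code}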



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-07-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Attachment: HDFS-11920-HDFS-7240.006.patch

Rebased with v006 patch.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads to this single SCM block. This works fine for keys with a 
> reasonably small data size. However, if the data is too huge (e.g. it does 
> not even fit into a single container), then we need to be able to partition 
> the key data into multiple blocks, each in one container. This JIRA changes 
> the key-related classes to support this.
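> The arithmetic of the partitioning is simple (the block size here is an 
> assumption, for illustration only):
> {code}
> // Split a key's data into fixed-size parts, one SCM block per part,
> // so each part can live in a different container.
> static int numBlocks(long keyLength, long blockSize) {
>   return (int) ((keyLength + blockSize - 1) / blockSize);  // ceiling division
> }
> // e.g. numBlocks(1_200_000_000L, 256L * 1024 * 1024) == 5
> {code}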



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103664#comment-16103664
 ] 

Konstantin Shvachko edited comment on HDFS-11896 at 7/27/17 6:42 PM:
-

Actually let me just commit [~zhz]'s patch, thanks. +1
Will update the trunk patch accordingly and commit.


was (Author: shv):
Yes the test should compare Namesystem stats after registration rather than the 
sum of DatanodeDescriptors. But with the same value before registration.
Actually it's worth adding both checks. Let me add more asserts.
Will update the trunk patch accordingly and commit.

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration

2017-07-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103664#comment-16103664
 ] 

Konstantin Shvachko commented on HDFS-11896:


Yes the test should compare Namesystem stats after registration rather than the 
sum of DatanodeDescriptors. But with the same value before registration.
Actually it's worth adding both checks. Let me add more asserts.
Will update the trunk patch accordingly and commit.

> Non-dfsUsed will be doubled on dead node re-registration
> 
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch, 
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch, 
> HDFS-11896-007.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896-branch-2.7-003.patch, 
> HDFS-11896-branch-2.7-004.patch, HDFS-11896-branch-2.7-005.patch, 
> HDFS-11896-branch-2.7-006.patch, HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Now restart and check the non-DFS data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12206) Rename the split EC / replicated block metrics

2017-07-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103651#comment-16103651
 ] 

Lei (Eddy) Xu commented on HDFS-12206:
--

This patch seems to overlap with HDFS-10999.

> Rename the split EC / replicated block metrics
> --
>
> Key: HDFS-12206
> URL: https://issues.apache.org/jira/browse/HDFS-12206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12206.001.patch
>
>
> Going through the split EC/replicated metrics, I think it'd be better to name 
> the replicated-only metrics with "ReplicatedBlocks" rather than "BlocksStat" 
> for clarity. "Stat" is also not a very descriptive name, so remove it for the 
> EC blocks as well. Finally, fix some inconsistencies that were missed earlier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12201) INode#getSnapshotINode() should get INodeAttributes from INodeAttributesProvider for the current INode

2017-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103649#comment-16103649
 ] 

Yongjun Zhang commented on HDFS-12201:
--

Thanks [~manojg] for working on this issue, and [~daryn] for commenting.

HI [~daryn],

{quote}
 One of the worst cases may be the NN inadvertently edit log the "fake" 
attributes.
{quote}
This is the issue that I'm trying to solve with HDFS-12202: when we distcp from 
srcCluster to dstCluster, if an external attribute provider is enabled in 
srcCluster, distcp would copy data from srcCluster's external attribute 
provider and save it to dstCluster's edit log and fsimage. The solution 
proposed in HDFS-12202 is to add a new set of APIs that bypass the external 
attribute provider when reading metadata, so distcp could use this set of APIs.

However, the change proposed in HDFS-12202 might be too disruptive because 1. 
it adds new APIs to the stable FileSystem interface, and 2. all downstream code 
will need to implement the APIs. But I don't see a better/cleaner solution at 
this point. Would appreciate it if you could share your thoughts and comments 
there.
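
To make the idea concrete, the kind of addition being discussed would look 
roughly like this (the interface name and flag are hypothetical, not a 
committed FileSystem change):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

interface RawAttributeReader {
  /**
   * Like FileSystem#getFileStatus, but when bypassProvider is true the
   * attributes come from the NN's persisted metadata (fsimage/edits)
   * rather than from an external INodeAttributesProvider, so tools like
   * distcp copy what is actually stored.
   */
  FileStatus getFileStatus(Path f, boolean bypassProvider) throws IOException;
}
{code}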

Thanks.
 

> INode#getSnapshotINode() should get INodeAttributes from 
> INodeAttributesProvider for the current INode
> --
>
> Key: HDFS-12201
> URL: https://issues.apache.org/jira/browse/HDFS-12201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.8.0
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12201.test.01.patch
>
>
> Problem: When an external INodeAttributesProvider is enabled, SnapshotDiff 
> does not detect changes in files when the external ACL/XAttr attributes 
> change. {{FileWithSnapshotFeature#changedBetweenSnapshots()}}, when trying to 
> detect changes in snapshots for the given file, does a metadata comparison 
> which takes in the attributes retrieved from {{INode#getSnapshotINode()}}.
> {{INodeFile}}
> {noformat}
>   @Override
>   public INodeFileAttributes getSnapshotINode(final int snapshotId) {
> FileWithSnapshotFeature sf = this.getFileWithSnapshotFeature();
> if (sf != null) {
>   return sf.getDiffs().getSnapshotINode(snapshotId, this);
> } else {
>   return this;
> }
>   }
> {noformat}
> {{AbstractINodeDiffList#getSnapshotINode}}
> {noformat}
>   public A getSnapshotINode(final int snapshotId, final A currentINode) {
> final D diff = getDiffById(snapshotId);
> final A inode = diff == null? null: diff.getSnapshotINode();
> return inode == null? currentINode: inode;
>   }
> {noformat}
> But INodeFile/INodeDirectory#getSnapshotINode() returns the current INode's 
> local INodeAttributes whenever anything is available for the given snapshot 
> id. When an INodeAttributesProvider is configured, the attributes provided by 
> the external provider could differ from the local ones, but 
> getSnapshotINode() always returns the local attributes without retrieving 
> them from the attributes provider. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103631#comment-16103631
 ] 

Andrew Wang commented on HDFS-12044:


LGTM +1, I just retriggered the precommit build since I don't see a run on the 
04 patch.

Could you link the follow-on JIRAs to this one? I might have missed them. 
Thanks Eddy!

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> ---
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to {{corePoolSize=2}} 
> and {{maxPoolSize=8}}, and it rejects new tasks once the queue is full.
> Consider what happens when {{BlockManager#maxReplicationStreams}} is larger than 
> {{ErasureCodingWorker#stripedReconstructionPool}}'s {{corePoolSize}}/{{maxPoolSize}}, 
> for example {{maxReplicationStreams=20}} with {{corePoolSize=2, maxPoolSize=8}}. 
> The NN sends up to {{maxTransfer}} reconstruction tasks to the DN on each 
> heartbeat, calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - xmitsInProgress;
> {code}
> However, at any given time, {{ErasureCodingWorker#stripedReconstructionPool}} 
> only accounts for 2 {{xmitsInProgress}}. So on each 3s heartbeat the NN sends 
> about {{20 - 2 = 18}} reconstruction tasks to the DN, and the DN throws most of 
> them away when 8 tasks are already queued. The NN then takes longer to 
> re-discover that these blocks are under-replicated and to schedule new tasks.
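
The rejection behavior is easy to reproduce with a plain ThreadPoolExecutor 
shaped like the reconstruction pool (a self-contained sketch; the real pool's 
queue type and rejection handling differ in detail, so this only shows the 
failure shape):

{code}
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolRejectionDemo {
  public static void main(String[] args) {
    // core=2, max=8, as in the default striped reconstruction pool.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    int rejected = 0;
    // ~ maxReplicationStreams(20) - xmitsInProgress(2) tasks per heartbeat:
    for (int i = 0; i < 18; i++) {
      try {
        pool.execute(() -> {
          try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
        });
      } catch (RejectedExecutionException e) {
        rejected++;
      }
    }
    System.out.println("rejected " + rejected + " of 18 tasks"); // typically 10
    pool.shutdownNow();
  }
}
{code}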






[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103610#comment-16103610
 ] 

Andrew Wang commented on HDFS-12151:


Thanks Sean! Looks like the checkstyle issues could be fixed, and the new test 
failed with an NPE. In the past, I've had issues with the thread-safety of 
mocking, which might be biting us here too. Changing the catch to catch 
RuntimeException separately first and re-throw it would give us more detail on 
the NPE too.
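
That catch-and-rethrow shape might look roughly like this (an illustrative 
sketch only; doWriteBlock stands in for whatever the test actually invokes):

{code}
import java.util.concurrent.Callable;

public class RethrowSketch {
  // Catch RuntimeException first and rethrow it unwrapped, so the NPE's own
  // stack trace reaches the test report instead of a generic wrapper.
  static <T> T run(Callable<T> doWriteBlock) {
    try {
      return doWriteBlock.call();
    } catch (RuntimeException e) {
      throw e;                                  // keep the original NPE visible
    } catch (Exception e) {
      throw new AssertionError("writeBlock failed", e);
    }
  }
}
{code}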

Given Ewan's response, it sounds like it's worth cleaning up some other things 
in this code too. I filed HDFS-12207 to track these separately, since they're 
less urgent than this change.

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because there 
> aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}






[jira] [Created] (HDFS-12207) A few DataXceiver#writeBlock cleanups related to optional storage IDs and types

2017-07-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12207:
--

 Summary: A few DataXceiver#writeBlock cleanups related to optional 
storage IDs and types
 Key: HDFS-12207
 URL: https://issues.apache.org/jira/browse/HDFS-12207
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0-alpha4
Reporter: Andrew Wang


Here's the conversation that [~ehiggs] and I had on HDFS-12151 regarding some 
improvements:

bq. Should we use nst > 0 rather than targetStorageTypes.length > 0 (amended) 
here for clarity?
Yes.
bq. Should the targetStorageTypes.length > 0 check really be nsi > 0? We could 
elide it then since it's already captured in the outside if.
This does look redundant, since targetStorageIds.length will be either 0 or 
equal to targetStorageTypes.length.
bq. Finally, I don't understand why we need to add the targeted ID/type for 
checkAccess. Each DN only needs to validate itself, yea? BTSM#checkAccess 
indicates this in its javadoc, but it looks like we run through ourselves and 
the targets each time:
That seems like a good simplification. I think I had assumed the BTI and 
requested types being checked should be the same (String - String, uint64 - 
uint64); but I don't see a reason why they have to be. Chris Douglas, what do 
you think?
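
A hedged sketch of the simplification being discussed (illustrative only: 
nst/nsi mirror the thread's shorthand for targetStorageTypes.length and 
targetStorageIds.length, and plain String[]s stand in for the real parameter 
types):

{code}
public class WriteBlockTargetsSketch {
  // Invariant from the thread: nsi == 0 || nsi == nst, which is what lets the
  // redundant inner length check be elided under the outer nst > 0 check.
  static void forwardTargets(String[] targetStorageTypes, String[] targetStorageIds) {
    final int nst = targetStorageTypes.length;
    final int nsi = targetStorageIds.length;  // either 0 or equal to nst
    if (nst > 0) {
      for (int i = 0; i < nst; i++) {
        String id = (nsi > 0) ? targetStorageIds[i] : null;  // IDs are optional
        // ... pass (targetStorageTypes[i], id) along to the next DN ...
      }
    }
  }
}
{code}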






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103601#comment-16103601
 ] 

Allen Wittenauer commented on HDFS-12034:
-

Applying HADOOP-14692 + this patch on top of HDFS-7240 makes apache-rat:check 
work locally:

{code}
Files with unapproved licenses:

  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-1.6.4.min.js
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-nvd3-1.0.9.min.js
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-route-1.6.4.min.js
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-3.5.17.min.js
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/nvd3-1.8.5.min.css
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/nvd3-1.8.5.min.css.map
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/nvd3-1.8.5.min.js
  /Users/aw/shared-vmware/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/nvd3-1.8.5.min.js.map
{code}
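
For context, bundled pre-minified assets like these are usually whitelisted 
through the rat plugin's exclude list; a sketch of that general mechanism (not 
the actual HDFS-7240 pom, and the patterns are illustrative):

{code}
<!-- Illustrative only: exclude bundled third-party web assets from rat. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>src/main/webapps/static/angular-*.min.js</exclude>
      <exclude>src/main/webapps/static/d3-*.min.js</exclude>
      <exclude>src/main/webapps/static/nvd3-*.min.*</exclude>
    </excludes>
  </configuration>
</plugin>
{code}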

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volumes)
> * Available keys (per buckets)






[jira] [Commented] (HDFS-9786) HttpFS doesn't write the proxyuser information in logfile

2017-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103594#comment-16103594
 ] 

Wei-Chiu Chuang commented on HDFS-9786:
---

Hello [~heesoo], thanks for filing this. Do you still plan to work on this jira?
I'd like to take a stab at it if you don't mind.

> HttpFS doesn't write the proxyuser information in logfile
> -
>
> Key: HDFS-9786
> URL: https://issues.apache.org/jira/browse/HDFS-9786
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: HeeSoo Kim
>Assignee: HeeSoo Kim
>
> According to the httpfs-log4j.properties, the log pattern indicates that
> {code}
> log4j.appender.httpfsaudit.layout.ConversionPattern=%d{ISO8601} %5p 
> [%X{hostname}][%X{user}:%X{doAs}] %X{op} %m%n
> {code}
> However, the httpfsaudit appender doesn't write the right information for the 
> user and proxyuser fields. It would be better to write the UGI to the audit log 
> in the HttpFS gateway.
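
For reference, the %X{...} fields in that pattern only render when each request 
populates the log4j MDC; a hedged sketch (not the actual HttpFS code) of the 
kind of plumbing implied:

{code}
import org.apache.log4j.MDC;

public class AuditContextSketch {
  // Populate the MDC keys referenced by the ConversionPattern above before
  // the audit logger fires; clear them when the request completes.
  static void fill(String hostname, String user, String doAs, String op) {
    MDC.put("hostname", hostname);
    MDC.put("user", user);   // authenticated (real) user
    MDC.put("doAs", doAs);   // proxy user, if any
    MDC.put("op", op);       // e.g. OPEN, CREATE
  }
}
{code}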






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103593#comment-16103593
 ] 

Allen Wittenauer commented on HDFS-12034:
-

Also, let's get HADOOP-14692 in before we spend a lot of time debugging why 
the rat check takes a while. (I have a hunch about what's wrong, and if that 
hunch is correct, we're upgrading anyway.)

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volumes)
> * Available keys (per buckets)






[jira] [Updated] (HDFS-12059) Ozone: OzoneClient: OzoneClientImpl Add setBucketProperty and delete calls

2017-07-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12059:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone: OzoneClient: OzoneClientImpl Add setBucketProperty and delete calls
> --
>
> Key: HDFS-12059
> URL: https://issues.apache.org/jira/browse/HDFS-12059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12059-HDFS-7240.000.patch
>
>
> Support for the following APIs in {{OzoneClientImpl}}:
> * {{addBucketAcls}}
> * {{removeBucketAcls}}
> * {{setBucketVersioning}}
> * {{setBucketStorageType}}
> * {{getKey}}
> * {{deleteVolume}}
> * {{deleteBucket}}
> * {{deleteKey}}






[jira] [Updated] (HDFS-6939) Support path-based filtering of inotify events

2017-07-27 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-6939:
-
Status: Open  (was: Patch Available)

> Support path-based filtering of inotify events
> --
>
> Key: HDFS-6939
> URL: https://issues.apache.org/jira/browse/HDFS-6939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode, qjm
>Reporter: James Thomas
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-6939-001.patch
>
>
> Users should be able to specify that they only want events involving 
> particular paths.
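
Until filtering is supported server-side, a client can only approximate this by 
filtering locally; a hedged sketch against the public inotify API (the NN URI 
and path prefix are made up, and this still ships every event to the client, 
which is exactly the overhead this jira aims to avoid):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class PathFilteredInotify {
  public static void main(String[] args) throws Exception {
    HdfsAdmin admin =
        new HdfsAdmin(URI.create("hdfs://nn:8020"), new Configuration());
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    String prefix = "/data/interesting/";
    while (true) {
      EventBatch batch = stream.take();  // blocks until the next batch arrives
      for (Event e : batch.getEvents()) {
        if (e instanceof Event.CreateEvent
            && ((Event.CreateEvent) e).getPath().startsWith(prefix)) {
          System.out.println("create: " + ((Event.CreateEvent) e).getPath());
        }
      }
    }
  }
}
{code}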






[jira] [Commented] (HDFS-12059) Ozone: OzoneClient: OzoneClientImpl Add setBucketProperty and delete calls

2017-07-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103579#comment-16103579
 ] 

Chen Liang commented on HDFS-12059:
---

+1 on v000 patch, the failed tests are unrelated. Committed to the feature 
branch. Thanks [~nandakumar131] for the contribution!

> Ozone: OzoneClient: OzoneClientImpl Add setBucketProperty and delete calls
> --
>
> Key: HDFS-12059
> URL: https://issues.apache.org/jira/browse/HDFS-12059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12059-HDFS-7240.000.patch
>
>
> Support for the following APIs in {{OzoneClientImpl}}:
> * {{addBucketAcls}}
> * {{removeBucketAcls}}
> * {{setBucketVersioning}}
> * {{setBucketStorageType}}
> * {{getKey}}
> * {{deleteVolume}}
> * {{deleteBucket}}
> * {{deleteKey}}






[jira] [Updated] (HDFS-12206) Rename the split EC / replicated block metrics

2017-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12206:
---
Labels: hdfs-ec-3.0-must-do  (was: )

> Rename the split EC / replicated block metrics
> --
>
> Key: HDFS-12206
> URL: https://issues.apache.org/jira/browse/HDFS-12206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12206.001.patch
>
>
> Going through the split EC/replicated metrics, I think it'd be better to name 
> the replicated-only metrics with "ReplicatedBlocks" rather than "BlocksStat" 
> for clarity. "Stat" is also not a very descriptive name, so remove it for the 
> EC blocks as well. Finally, fix some inconsistencies that were missed earlier.






[jira] [Updated] (HDFS-12206) Rename the split EC / replicated block metrics

2017-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12206:
---
Hadoop Flags: Incompatible change
  Status: Patch Available  (was: Open)

> Rename the split EC / replicated block metrics
> --
>
> Key: HDFS-12206
> URL: https://issues.apache.org/jira/browse/HDFS-12206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12206.001.patch
>
>
> Going through the split EC/replicated metrics, I think it'd be better to name 
> the replicated-only metrics with "ReplicatedBlocks" rather than "BlocksStat" 
> for clarity. "Stat" is also not a very descriptive name, so remove it for the 
> EC blocks as well. Finally, fix some inconsistencies that were missed earlier.






[jira] [Updated] (HDFS-12206) Rename the split EC / replicated block metrics

2017-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12206:
---
Attachment: HDFS-12206.001.patch

Patch attached. This fixes up the naming of the public interfaces.

These changes should also be propagated down into BlockManager and classes like 
LowRedundancyBlocks and InvalidateBlocks, but that's another big renaming pass. 
I notice that BM also doesn't respect the same naming conventions as FSN wrt 
what's an aggregate, replicated-only, or EC-only statistic. There's also some 
inaccuracy wrt "ECBlocks" vs. "ECBlockGroups" in some variable names. I'd like 
to fix these separately; behavior changes especially should go in separate JIRAs.

> Rename the split EC / replicated block metrics
> --
>
> Key: HDFS-12206
> URL: https://issues.apache.org/jira/browse/HDFS-12206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12206.001.patch
>
>
> Going through the split EC/replicated metrics, I think it'd be better to name 
> the replicated-only metrics with "ReplicatedBlocks" rather than "BlocksStat" 
> for clarity. "Stat" is also not a very descriptive name, so remove it for the 
> EC blocks as well. Finally, fix some inconsistencies that were missed earlier.






[jira] [Commented] (HDFS-12202) Provide new set of FileSystem API to bypass external attribute provider

2017-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103567#comment-16103567
 ] 

Yongjun Zhang commented on HDFS-12202:
--

Besides the change being very wide, I wonder whether adding new APIs to 
FileSystem would be too disruptive and break any use cases, given that 
FileSystem is both public and stable.

{code}
@InterfaceAudience.Public
@InterfaceStability.Stable
public abstract class FileSystem extends Configured implements Closeable {
{code}

I would really appreciate it if anyone could comment, [~daryn] and other folks!

Thanks.



> Provide new set of FileSystem API to bypass external attribute provider
> ---
>
> Key: HDFS-12202
> URL: https://issues.apache.org/jira/browse/HDFS-12202
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> HDFS client uses 
> {code}
>   /**
>    * Return a file status object that represents the path.
>    * @param f The path we want information from
>    * @return a FileStatus object
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public abstract FileStatus getFileStatus(Path f) throws IOException;
>
>   /**
>    * List the statuses of the files/directories in the given path if the path is
>    * a directory.
>    *
>    * Does not guarantee to return the List of files/directories status in a
>    * sorted order.
>    *
>    * Will not return null. Expect IOException upon access error.
>    * @param f given path
>    * @return the statuses of the files/directories in the given path
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
>                                                          IOException;
> {code}
> to get FileStatus of files.
> When an external attribute provider (INodeAttributeProvider) is enabled for a 
> cluster, the external attribute provider is consulted to get back some 
> relevant info (including ACL, group, etc.), which is returned in the 
> FileStatus. There is a problem here: when we use distcp to copy files from 
> srcCluster to tgtCluster, if srcCluster has an external attribute provider 
> enabled, the data we copy would contain data from the attribute provider, 
> which we may not want.
> I created this jira to add a new set of interfaces for distcp to use, so that 
> distcp can copy HDFS data only and bypass the external attribute provider data.
> The new set of APIs would look like:
> {code}
>   /**
>    * Return a file status object that represents the path.
>    * @param f The path we want information from
>    * @param bypassExtAttrProvider if true, bypass external attr provider
>    *        when it's in use.
>    * @return a FileStatus object
>    * @throws FileNotFoundException when the path does not exist
>    * @throws IOException see specific implementation
>    */
>   public FileStatus getFileStatus(Path f,
>       final boolean bypassExtAttrProvider) throws IOException;
>
>   /**
>    * List the statuses of the files/directories in the given path if the path is
>    * a directory.
>    *
>    * Does not guarantee to return the List of files/directories status in a
>    * sorted order.
>    *
>    * Will not return null. Expect IOException upon access error.
>    * @param f
>    * @param bypassExtAttrProvider if true, bypass external attr provider
>    *        when it's in use.
>    * @return
>    * @throws FileNotFoundException
>    * @throws IOException
>    */
>   public FileStatus[] listStatus(Path f,
>       final boolean bypassExtAttrProvider) throws FileNotFoundException,
>       IOException;
> {code}
> So when bypassExtAttrProvider is true, the external attribute provider will 
> be bypassed.
> Thanks.
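
If the proposed overloads landed as sketched above, a distcp-style caller might 
use them like this (hypothetical: the boolean overload does not exist in 
FileSystem today, so this would not compile against current Hadoop):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RawListingSketch {
  // List raw HDFS metadata, skipping provider-supplied attributes, using the
  // *proposed* listStatus(Path, boolean) overload from this jira.
  static void listRaw(FileSystem srcFs, Path srcDir) throws IOException {
    FileStatus[] entries = srcFs.listStatus(srcDir, true); // true = bypass provider
    for (FileStatus st : entries) {
      System.out.println(st.getPath() + " owner=" + st.getOwner());
    }
  }
}
{code}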





