[jira] [Updated] (HDFS-12725) BlockPlacementPolicyRackFaultTolerant still fails with racks with very few nodes

2017-10-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12725:
-
Attachment: HDFS-12725.04.patch

Patch 4 fixes the relevant checkstyle issues. A method with 7+ parameters is 
not my favorite, but it is consistent with other calls inside the BPP.

> BlockPlacementPolicyRackFaultTolerant still fails with racks with very few 
> nodes
> 
>
> Key: HDFS-12725
> URL: https://issues.apache.org/jira/browse/HDFS-12725
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12725.01.patch, HDFS-12725.02.patch, 
> HDFS-12725.03.patch, HDFS-12725.04.patch
>
>
> HDFS-12567 tries to fix the scenario where EC blocks may not be allocated in 
> an extremely rack-imbalanced cluster.
> The fall-back step added by that fix could be improved to do a best-effort 
> placement. This is more likely to happen in testing than in real clusters.
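For illustration, a minimal self-contained sketch of the "best-effort" idea (it
is not the actual patch, and the names are simplified stand-ins for the real
BlockPlacementPolicyRackFaultTolerant logic): start from the strictest
replicas-per-rack limit and relax it until enough nodes are found, returning
whatever was placed rather than failing outright.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BestEffortPlacementSketch {
  /** nodeToRack maps datanode name to rack name; returns chosen node names. */
  static List<String> choose(Map<String, String> nodeToRack, int replicas) {
    int racks = Math.max(1, (int) nodeToRack.values().stream().distinct().count());
    for (int maxPerRack = (replicas + racks - 1) / racks; ; maxPerRack++) {
      Map<String, Integer> perRack = new HashMap<>();
      List<String> chosen = new ArrayList<>();
      for (Map.Entry<String, String> e : nodeToRack.entrySet()) {
        int used = perRack.getOrDefault(e.getValue(), 0);
        if (used < maxPerRack && chosen.size() < replicas) {
          perRack.put(e.getValue(), used + 1);
          chosen.add(e.getKey());
        }
      }
      // Either the limit was satisfied, or no further relaxation can help:
      // return what we have (best effort) instead of throwing.
      if (chosen.size() == replicas || maxPerRack >= replicas) {
        return chosen;
      }
    }
  }
}
{code}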






[jira] [Updated] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-10-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12482:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews [~xiaochen]. Committed to trunk and 3.0.0. 

> Provide a configuration to adjust the weight of EC recovery tasks to adjust 
> the speed of recovery
> -
>
> Key: HDFS-12482
> URL: https://issues.apache.org/jira/browse/HDFS-12482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0
>
> Attachments: HDFS-12482.00.patch, HDFS-12482.01.patch, 
> HDFS-12482.02.patch, HDFS-12482.03.patch, HDFS-12482.04.patch, 
> HDFS-12482.05.patch
>
>
> The relative speed of EC recovery compared to 3x replica recovery is a 
> function of the EC codec, number of sources, NIC speed, CPU speed, etc. 
> Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of 
> sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the 
> NN uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to 
> the DataNode. Thus we can add a coefficient that lets users tune the weight 
> of EC recovery tasks.
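A rough sketch of the weighting idea (simplified stand-alone code, not the
committed patch; see the patch itself for the actual configuration key and
DataNode wiring):

{code:java}
public class EcXmitsWeightSketch {
  /**
   * EC recovery currently counts as max(#sources, #targets) transfers versus
   * 1 for a 3x-replication recovery; a configurable weight scales that cost.
   */
  static int ecRecoveryXmits(int numSources, int numTargets, float weight) {
    // Never report less than 1, so a running task still occupies a slot.
    return Math.max(1, (int) (weight * Math.max(numSources, numTargets)));
  }

  public static void main(String[] args) {
    // Reconstructing one RS-6-3 block: 6 sources, 1 target, weight 0.5.
    System.out.println(ecRecoveryXmits(6, 1, 0.5f)); // prints 3
  }
}
{code}

With a weight below 1, the NN schedules more concurrent EC recovery tasks per
DataNode; above 1, fewer.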






[jira] [Updated] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12682:
-
Attachment: HDFS-12682.07.patch

Patch 7 to fix checkstyle. Test failures look unrelated.

> ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as 
> DISABLED
> 
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch, 
> HDFS-12682.03.patch, HDFS-12682.04.patch, HDFS-12682.05.patch, 
> HDFS-12682.06.patch, HDFS-12682.07.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass, because the static instance that the 
> client (e.g. ECAdmin) reads in unit tests is updated by the NN. :)
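To make the caching pitfall concrete, a simplified stand-alone sketch of the
pattern (the types below stand in for the real ErasureCodingPolicy /
PBHelperClient classes; this is not the actual Hadoop code):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class CachedPolicySketch {
  enum State { DISABLED, ENABLED }

  static class Policy {
    final int id;
    State state = State.DISABLED; // cached instances default to DISABLED
    Policy(int id) { this.id = id; }
  }

  // Stand-in for the static SystemErasureCodingPolicies cache.
  static final Map<Integer, Policy> SYSTEM = new HashMap<>();
  static { SYSTEM.put(1, new Policy(1)); }

  // Buggy converter: short-circuits to the cached instance and silently
  // drops the state that was carried over the wire.
  static Policy convert(int id, State wireState) {
    Policy cached = SYSTEM.get(id);
    if (cached != null) {
      return cached; // wireState is lost here
    }
    Policy p = new Policy(id);
    p.state = wireState;
    return p;
  }

  public static void main(String[] args) {
    // The NN enabled policy 1, yet a remote client still sees DISABLED:
    System.out.println(convert(1, State.ENABLED).state);
  }
}
{code}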






[jira] [Commented] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16233559#comment-16233559
 ] 

Hadoop QA commented on HDFS-12682:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 4 new + 647 unchanged 
- 2 fixed = 651 total (was 649) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 32s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}296m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
| Timed out junit tests | org.apache.hadoop.mapred.pipes.TestPipeApplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895063/HDFS-12682.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 82230abc7fa2 3.13.0-129-generic #178-Ubuntu 

[jira] [Commented] (HDFS-12714) Hadoop 3 missing fix for HDFS-5169

2017-10-31 Thread Joe McDonnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16233549#comment-16233549
 ] 

Joe McDonnell commented on HDFS-12714:
--

[~jzhuge]
The test case that spurred the creation of this JIRA is an Impala test case for 
Impala's fallback code when a zero-copy HDFS read fails. 
It does this by mixing zero-copy reads with encryption zones, which is a known 
error case (see IMPALA-3679). That error causes libhdfs to call 
translateZCRException, which without this patch was failing and crashing 
Impala. 

I tested this patch by building libhdfs and running the same test case. It 
succeeds where it was previously crashing.

> Hadoop 3 missing fix for HDFS-5169
> --
>
> Key: HDFS-12714
> URL: https://issues.apache.org/jira/browse/HDFS-12714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2, 3.0.0-alpha4, 
> 3.0.0-alpha3
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
>Priority: Major
> Attachments: HDFS-12714.001.patch
>
>
> HDFS-5169 is a fix for a null pointer dereference in translateZCRException. 
> This line in hdfs.c:
> ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> should be:
> ret = printExceptionAndFree(env, exc, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> Plainly, translateZCRException should print the exception (exc) passed in to 
> the function rather than the uninitialized local jthr.
> The fix for HDFS-5169 (part of HDFS-4949) exists on hadoop 2.* branches, but 
> it is missing on hadoop 3 branches including trunk.
> Hadoop 2.8:
> https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2514
> Hadoop 3.0:
> https://github.com/apache/hadoop/blob/branch-3.0/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2691






[jira] [Commented] (HDFS-12714) Hadoop 3 missing fix for HDFS-5169

2017-10-31 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227809#comment-16227809
 ] 

John Zhuge commented on HDFS-12714:
---

+1 LGTM. I will commit once the testing that was done is clarified.

> Hadoop 3 missing fix for HDFS-5169
> --
>
> Key: HDFS-12714
> URL: https://issues.apache.org/jira/browse/HDFS-12714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2, 3.0.0-alpha4, 
> 3.0.0-alpha3
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
> Attachments: HDFS-12714.001.patch
>
>
> HDFS-5169 is a fix for a null pointer dereference in translateZCRException. 
> This line in hdfs.c:
> ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> should be:
> ret = printExceptionAndFree(env, exc, PRINT_EXC_ALL, "hadoopZeroCopyRead: 
> ZeroCopyCursor#read failed");
> Plainly, translateZCRException should print the exception (exc) passed in to 
> the function rather than the uninitialized local jthr.
> The fix for HDFS-5169 (part of HDFS-4949) exists on hadoop 2.* branches, but 
> it is missing on hadoop 3 branches including trunk.
> Hadoop 2.8:
> https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2514
> Hadoop 3.0:
> https://github.com/apache/hadoop/blob/branch-3.0/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L2691






[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227805#comment-16227805
 ] 

Hadoop QA commented on HDFS-12681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 55s{color} | {color:orange} root: The patch generated 3 new + 381 unchanged 
- 8 fixed = 384 total (was 389) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | 

[jira] [Commented] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227786#comment-16227786
 ] 

Xiao Chen commented on HDFS-12682:
--

Thanks Eddy! If there are no objections, I'd like to commit this in 24 hours.

[~Sammi] [~drankye] [~rakeshr] [~andrew.wang] FYI

> ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as 
> DISABLED
> 
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch, 
> HDFS-12682.03.patch, HDFS-12682.04.patch, HDFS-12682.05.patch, 
> HDFS-12682.06.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass, because the static instance that the 
> client (e.g. ECAdmin) reads in unit tests is updated by the NN. :)






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.
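A hypothetical sketch of what such a merge could look like (the interface and
method names below are illustrative only, not the names chosen by the patch):
one read-only abstraction serving both the Namenode-style full scan and the
Datanode-style point lookup.

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

public class MergedProviderSketch {
  /** Stand-in for the real file-region record (block id -> offset/length). */
  static class FileRegion {
    final long blockId, offset, length;
    FileRegion(long blockId, long offset, long length) {
      this.blockId = blockId; this.offset = offset; this.length = length;
    }
  }

  /** One provider for both roles: iterate (NN) and resolve (DN). */
  interface BlockAliasProvider extends Iterable<FileRegion> {
    FileRegion resolve(long blockId);
  }

  static class InMemoryProvider implements BlockAliasProvider {
    private final Map<Long, FileRegion> regions = new TreeMap<>();
    void add(FileRegion r) { regions.put(r.blockId, r); }
    public FileRegion resolve(long blockId) { return regions.get(blockId); }
    public Iterator<FileRegion> iterator() { return regions.values().iterator(); }
  }
}
{code}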






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.012.patch

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch, 
> HDFS-11902-HDFS-9806.012.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Issue Comment Deleted] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Comment: was deleted

(was: The failed test cases are unrelated to the patch, as are the ASF 
warnings. Will commit patch v011 to the HDFS-9806 feature branch soon.)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227772#comment-16227772
 ] 

Virajith Jalaparti commented on HDFS-11902:
---

The failed test cases are unrelated to the patch, as are the ASF warnings. 
Will commit patch v011 to the HDFS-9806 feature branch soon.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227727#comment-16227727
 ] 

Hadoop QA commented on HDFS-12549:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m 
47s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
44s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestFileCorruption |
|   | 

[jira] [Commented] (HDFS-12474) Ozone: SCM: Handling container report with key count and container usage.

2017-10-31 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227716#comment-16227716
 ] 

Nanda kumar commented on HDFS-12474:


Follow-up jiras:
* HDFS-12751 - Ozone: SCM: update container allocated size to container db for 
all the open containers in ContainerStateManager#close
* HDFS-12752 - Ozone: SCM: Make container report processing asynchronous

> Ozone: SCM: Handling container report with key count and container usage.
> -
>
> Key: HDFS-12474
> URL: https://issues.apache.org/jira/browse/HDFS-12474
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Nanda kumar
>  Labels: ozoneMerge
> Attachments: HDFS-12474-HDFS-7240.000.patch, 
> HDFS-12474-HDFS-7240.001.patch
>
>
> Currently, the container report only contains the # of reports sent to SCM. 
> We will need to provide the key count and the usage of each individual 
> container to update the SCM container state maintained by 
> ContainerStateManager. This has a dependency on HDFS-12387.






[jira] [Created] (HDFS-12752) Ozone: SCM: Make container report processing asynchronous

2017-10-31 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12752:
--

 Summary: Ozone: SCM: Make container report processing asynchronous
 Key: HDFS-12752
 URL: https://issues.apache.org/jira/browse/HDFS-12752
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar


{{StorageContainerManager#sendContainerReport}} processes the container reports 
sent by datanodes; it calls {{ContainerMapping#processContainerReports}} to do 
the actual processing. This jira is to make the 
{{ContainerMapping#processContainerReports}} call asynchronous.
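A minimal sketch of the change being proposed (simplified names; the real
classes are StorageContainerManager and ContainerMapping): the RPC handler
enqueues the report and returns immediately, while a worker thread does the
actual processing.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncContainerReportSketch {
  // A single worker keeps report processing ordered while freeing RPC threads.
  private final ExecutorService reportExecutor =
      Executors.newSingleThreadExecutor();

  void sendContainerReport(Object report) {
    // Before: processContainerReports(report) ran inline on the RPC thread.
    reportExecutor.submit(() -> processContainerReports(report));
  }

  private void processContainerReports(Object report) {
    // ... update SCM container state from the datanode report ...
  }
}
{code}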






[jira] [Commented] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227707#comment-16227707
 ] 

Hadoop QA commented on HDFS-9240:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 29s{color} | {color:orange} root: The patch generated 20 new + 530 unchanged 
- 44 fixed = 550 total (was 574) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 41s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
6s{color} | {color:green} hadoop-gridmix in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
45s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
39s{color} | {color:red} 

[jira] [Created] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-10-31 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12751:
--

 Summary: Ozone: SCM: update container allocated size to container 
db for all the open containers in ContainerStateManager#close
 Key: HDFS-12751
 URL: https://issues.apache.org/jira/browse/HDFS-12751
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar


Container allocated size is maintained in memory by {{ContainerStateManager}}; 
it has to be updated in the container db when we shut down SCM. 
{{ContainerStateManager#close}} will be called during SCM shutdown, so updating 
the allocated size for all the open containers should be done there.
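A sketch of the intended close() behavior (simplified types, not the actual
patch): flush the in-memory allocated sizes for every open container to the
container db before SCM shuts down.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContainerStateCloseSketch implements Closeable {
  // In-memory allocated size per open container, keyed by container id.
  private final Map<Long, Long> allocatedBytes = new ConcurrentHashMap<>();

  @Override
  public void close() throws IOException {
    // Persist every open container's allocated size so the next SCM
    // start-up does not read a stale value.
    for (Map.Entry<Long, Long> e : allocatedBytes.entrySet()) {
      writeAllocatedSizeToDb(e.getKey(), e.getValue());
    }
  }

  private void writeAllocatedSizeToDb(long containerId, long bytes)
      throws IOException {
    // ... put into the container db, keyed by containerId ...
  }
}
{code}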






[jira] [Commented] (HDFS-12474) Ozone: SCM: Handling container report with key count and container usage.

2017-10-31 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227704#comment-16227704
 ] 

Nanda kumar commented on HDFS-12474:


Thanks [~linyiqun] & [~xyao] for the review. Uploaded patch v001 addressing the 
review comments.

>> line 230: Why not use {{float}} type value for the threshold-percentage 
>> setting?
Changed.

>> line 375: {{(containerInfo.getUsed() / containerInfo.getSize())}} should 
>> be...
Fixed.

>> line 369: ContainerInfo.lastUsed isn't set here.
We are setting lastUsed in {{ContainerInfo#Constructor}}, line 58 
{{this.lastUsed = Time.monotonicNow()}}. We don't expose {{lastUsed}} through a 
constructor argument or a setter; this is to avoid someone setting it to past 
values. Updating of lastUsed is always done with {{Time.monotonicNow()}}.

>> Need to update the reference to BlockContainerInfo since the class is 
>> removed with the patch. Otherwise, maven fails to build.
Thanks for the catch; I don't know how it got missed.

>> NIT: Line 27-33 unused imports
I don't see any unused imports in ContainerMapping.java; please let me know if 
I'm missing something.

>> if we always update the allocated based on the reported usage, should we 
>> restrict the size to be smaller than the maximum container size (5gb)?
This is done in container report processing because, in case of an SCM crash, 
the allocated size will not be properly updated by {{ContainerStateManager}} 
\[ContainerStateManager#close - will create a jira for this\]; during the next 
start-up of SCM we don't want the allocated size to be very old (i.e. from the 
previous successful shutdown of SCM). This update is a fallback mechanism in 
case SCM crashes without properly persisting the allocated size; it is not an 
exact allocated size.
In theory the allocated size can exceed the container size, so I don't think we 
should restrict the size here. For that matter, even the used size might exceed 
the container size if we don't receive container reports in time (containers 
will be closed while processing container reports).

>> document the new configuration key for 
>> ozone.scm.container.close.threshold.percentage in ozone-default.xml?
Done.
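(For the "line 375" point above, the hazard is Java integer division: both
sizes are {{long}}s, so the unfixed quotient truncates to 0 for any partially
filled container and a percentage threshold would never trigger. A tiny
illustration, with made-up names:)

{code:java}
public class CloseThresholdSketch {
  static boolean shouldClose(long usedBytes, long sizeBytes, float threshold) {
    // Cast before dividing; (usedBytes / sizeBytes) would be 0 for used < size.
    return ((float) usedBytes / sizeBytes) >= threshold;
  }

  public static void main(String[] args) {
    long gb = 1024L * 1024 * 1024;
    System.out.println(shouldClose(5 * gb - 1, 5 * gb, 0.9f)); // true
  }
}
{code}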

> Ozone: SCM: Handling container report with key count and container usage.
> -
>
> Key: HDFS-12474
> URL: https://issues.apache.org/jira/browse/HDFS-12474
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Nanda kumar
>  Labels: ozoneMerge
> Attachments: HDFS-12474-HDFS-7240.000.patch, 
> HDFS-12474-HDFS-7240.001.patch
>
>
> Currently, the container report only contains the # of reports sent to SCM. 
> We will need to provide the key count and the usage of each individual 
> container to update the SCM container state maintained by 
> ContainerStateManager. This has a dependency on HDFS-12387.






[jira] [Updated] (HDFS-12474) Ozone: SCM: Handling container report with key count and container usage.

2017-10-31 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12474:
---
Attachment: HDFS-12474-HDFS-7240.001.patch

> Ozone: SCM: Handling container report with key count and container usage.
> -
>
> Key: HDFS-12474
> URL: https://issues.apache.org/jira/browse/HDFS-12474
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Nanda kumar
>  Labels: ozoneMerge
> Attachments: HDFS-12474-HDFS-7240.000.patch, 
> HDFS-12474-HDFS-7240.001.patch
>
>
> Currently, the container report only contains the # of reports sent to SCM. 
> We will need to provide the key count and the usage of each individual 
> container to update the SCM container state maintained by 
> ContainerStateManager. This has a dependency on HDFS-12387.






[jira] [Commented] (HDFS-12667) KMSClientProvider#ValueQueue does synchronous fetch of edeks in background async thread.

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227682#comment-16227682
 ] 

Hadoop QA commented on HDFS-12667:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 20 unchanged - 4 fixed = 21 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
34s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895068/HDFS-12667-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7bb82d218c77 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ed24da3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21905/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21905/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21905/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HDFS-12737) Thousands of sockets lingering in TIME_WAIT state due to frequent file open operations

2017-10-31 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227678#comment-16227678
 ] 

Jitendra Nath Pandey commented on HDFS-12737:
-

BlockTokenSelector uses only the Token Kind to match the token, therefore you 
would need to either change the selector or make sure the UGI has only one 
token. The current user could be trying to read/write multiple files in 
parallel, and therefore dealing with multiple tokens at an instant.
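A simplified illustration of the selector behavior described above (stand-in
types, not the real BlockTokenSelector): matching on token kind alone returns
the first block token in the credentials, which is ambiguous when a UGI holds
block tokens for several files at once.

{code:java}
import java.util.List;

public class KindOnlySelectorSketch {
  static class Token {
    final String kind, service;
    Token(String kind, String service) { this.kind = kind; this.service = service; }
  }

  /** Kind-only matching: whichever block token happens to come first wins. */
  static Token selectByKind(List<Token> credentials, String kind) {
    for (Token t : credentials) {
      if (t.kind.equals(kind)) {
        return t; // ignores t.service, i.e. which block the token is for
      }
    }
    return null;
  }
}
{code}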
   
   

> Thousands of sockets lingering in TIME_WAIT state due to frequent file open 
> operations
> --
>
> Key: HDFS-12737
> URL: https://issues.apache.org/jira/browse/HDFS-12737
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ipc
> Environment: CDH5.10.2, HBase Multi-WAL=2, 250 replication peers
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> On an HBase cluster we found HBase RegionServers have thousands of sockets in 
> TIME_WAIT state. This depleted system resources and caused other services to 
> fail.
> After months of troubleshooting, we found that the cluster has hundreds of 
> replication peers, and has multi-WAL = 2. That creates hundreds of 
> replication threads in the HBase RS, and each thread opens a WAL file *every 
> second*.
> We found that the IPC client closes the socket right away, and does not reuse 
> the socket connection. Since each closed socket stays in TIME_WAIT state for 
> 60 seconds in Linux by default, that generates thousands of TIME_WAIT sockets.
> {code:title=ClientDatanodeProtocolTranslatorPB:createClientDatanodeProtocolProxy}
> // Since we're creating a new UserGroupInformation here, we know that no
> // future RPC proxies will be able to re-use the same connection. And
> // usages of this proxy tend to be one-off calls.
> //
> // This is a temporary fix: callers should really achieve this by using
> // RPC.stopProxy() on the resulting object, but this is currently not
> // working in trunk. See the discussion on HDFS-1965.
> Configuration confWithNoIpcIdle = new Configuration(conf);
> confWithNoIpcIdle.setInt(CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 0);
> {code}
> This piece of code is used in DistributedFileSystem#open()
> {noformat}
> 2017-10-27 14:01:44,152 DEBUG org.apache.hadoop.ipc.Client: New connection 
> Thread[IPC Client (1838187805) connection to /172.131.21.48:20001 from 
> blk_1013754707_14032,5,main] for remoteId /172.131.21.48:20001
> java.lang.Throwable: For logging stack trace, not a real exception
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1556)
> at org.apache.hadoop.ipc.Client.call(Client.java:1482)
> at org.apache.hadoop.ipc.Client.call(Client.java:1443)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy28.getReplicaVisibleLength(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolTranslatorPB.java:198)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:365)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:335)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:322)
> at 
> org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:783)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:747)
>

[jira] [Updated] (HDFS-12750) Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12750:
--
Status: Patch Available  (was: Open)

> Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions
> 
>
> Key: HDFS-12750
> URL: https://issues.apache.org/jira/browse/HDFS-12750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12750-HDFS-7240.001.patch
>
>
> Some of the newly added ozone tests need to shut down the MiniOzoneCluster so 
> that the metadata db and test files are cleaned up for subsequent tests.
> TestStorageContainerManager#testBlockDeletionTransactions
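For illustration, a minimal JUnit 4 teardown sketch, assuming the cluster handle 
is a field created in setup and exposes the usual shutdown() method (names 
illustrative, not the attached patch):
{code:java}
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.junit.After;

public class TestShutdownSketch {
  // Assumed to be created in a @Before/@BeforeClass method.
  private MiniOzoneCluster cluster;

  @After
  public void cleanup() {
    if (cluster != null) {
      // Releases the metadata db and on-disk test files so they cannot
      // leak into subsequent tests.
      cluster.shutdown();
      cluster = null;
    }
  }
}
{code}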



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12697:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~bharatviswa] for the contribution and all for the reviews. I've 
committed the patch to the feature branch.

> Ozone services must stay disabled in secure setup for alpha
> ---
>
> Key: HDFS-12697
> URL: https://issues.apache.org/jira/browse/HDFS-12697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: HDFS-7240
>
> Attachments: HDFS-12697-HDFS-7240.01.patch, 
> HDFS-12697-HDFS-7240.02.patch, HDFS-12697-HDFS-7240.03.patch, 
> HDFS-12697-HDFS-7240.04.patch
>
>
> When security is enabled, ozone services should not start up, even if ozone 
> configurations are enabled. This is important to ensure a user experimenting 
> with ozone doesn't inadvertently get exposed to attacks. Specifically,
> 1) KSM should not start up.
> 2) SCM should not start up.
> 3) Datanode's ozone XceiverServer should not start up, and must not listen on 
> a port.
> 4) Datanode's ozone handler port should not be open, and webservice must stay 
> disabled.
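A minimal sketch of the kind of guard this implies, assuming a boolean 
ozone-enabled configuration key (the key name and exception wording here are 
illustrative, not the committed patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public final class OzoneSecurityGuard {
  // Hypothetical key name for this sketch; the branch defines its own constant.
  static final String OZONE_ENABLED = "ozone.enabled";

  private OzoneSecurityGuard() {}

  /** Refuse to start any ozone service in a Kerberized cluster. */
  public static void checkAllowed(Configuration conf) {
    if (conf.getBoolean(OZONE_ENABLED, false)
        && UserGroupInformation.isSecurityEnabled()) {
      throw new IllegalStateException(
          "Ozone is alpha and must stay disabled when security is enabled");
    }
  }
}
{code}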



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227650#comment-16227650
 ] 

Hadoop QA commented on HDFS-11807:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
46s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
6s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}321m 23s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}404m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | memcheck_hdfs_config_connect_bugs |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-11807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12894994/HDFS-11807.HDFS-8707.003.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 09a70323b4eb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-8707 / 9d35dff |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21894/artifact/out/whitespace-eol.txt
 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21894/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21894/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21894/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21894/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> 

[jira] [Updated] (HDFS-12750) Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12750:
--
Attachment: HDFS-12750-HDFS-7240.001.patch

> Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions
> 
>
> Key: HDFS-12750
> URL: https://issues.apache.org/jira/browse/HDFS-12750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12750-HDFS-7240.001.patch
>
>
> Some of the newly added ozone tests need to shut down the MiniOzoneCluster so 
> that the metadata db and test files are cleaned up for subsequent tests.
> TestStorageContainerManager#testBlockDeletionTransactions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12750) Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12750:
--
Summary: Ozone: Fix 
TestStorageContainerManager#testBlockDeletionTransactions  (was: Ozone: 
Stabilize Ozone unit tests)

> Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions
> 
>
> Key: HDFS-12750
> URL: https://issues.apache.org/jira/browse/HDFS-12750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Some of the newly added ozone tests need to shut down the MiniOzoneCluster so 
> that the metadata db and test files are cleaned up for subsequent tests.
> TestStorageContainerManager#testBlockDeletionTransactions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha

2017-10-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227635#comment-16227635
 ] 

Xiaoyu Yao commented on HDFS-12697:
---

+1, I will commit it shortly.

> Ozone services must stay disabled in secure setup for alpha
> ---
>
> Key: HDFS-12697
> URL: https://issues.apache.org/jira/browse/HDFS-12697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-12697-HDFS-7240.01.patch, 
> HDFS-12697-HDFS-7240.02.patch, HDFS-12697-HDFS-7240.03.patch, 
> HDFS-12697-HDFS-7240.04.patch
>
>
> When security is enabled, ozone services should not start up, even if ozone 
> configurations are enabled. This is important to ensure a user experimenting 
> with ozone doesn't inadvertently get exposed to attacks. Specifically,
> 1) KSM should not start up.
> 2) SCM should not start up.
> 3) Datanode's ozone XceiverServer should not start up, and must not listen on 
> a port.
> 4) Datanode's ozone handler port should not be open, and webservice must stay 
> disabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227627#comment-16227627
 ] 

Lei (Eddy) Xu commented on HDFS-12682:
--

+1. Thanks Xiao.

> ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as 
> DISABLED
> 
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch, 
> HDFS-12682.03.patch, HDFS-12682.04.patch, HDFS-12682.05.patch, 
> HDFS-12682.06.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass, because the static instance that the 
> client (e.g. ECAdmin) reads in a unit test is updated by the NN. :)
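A simplified, hypothetical rendering of the lookup pattern described above (not 
the exact PBHelperClient code) shows the failure mode: the static cache hit 
short-circuits before the state carried in the protobuf is ever consulted.
{code:java}
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;

final class ConvertSketch {
  private ConvertSketch() {}

  // Sketch only: the method shape and the fall-through are illustrative.
  static ErasureCodingPolicy convert(HdfsProtos.ErasureCodingPolicyProto p) {
    ErasureCodingPolicy cached =
        SystemErasureCodingPolicies.getByID((byte) p.getId());
    if (cached != null) {
      // Bug: the cached object is returned as-is, so its construction-time
      // default (DISABLED) wins over whatever state the NN serialized.
      return cached;
    }
    // Non-system policies would be fully deserialized here (elided).
    throw new IllegalArgumentException("Unknown policy id " + p.getId());
  }
}
{code}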



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227604#comment-16227604
 ] 

Hadoop QA commented on HDFS-11902:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
5s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 52s{color} | {color:orange} root: The patch generated 9 new + 448 unchanged 
- 11 fixed = 457 total (was 459) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m  2s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 300 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestRestartDFS |
|   | 

[jira] [Commented] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227600#comment-16227600
 ] 

Hadoop QA commented on HDFS-12705:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
48s{color} | {color:red} The patch generated 156 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSMkdirs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12894872/HDFS-12705.002.patch |
| Optional Tests |  asflicense  compile  javac  

[jira] [Commented] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227572#comment-16227572
 ] 

Íñigo Goiri commented on HDFS-12699:


[~andrew.wang] added 2.9.0 and 3.0.0.
Not sure if any of the intermediate versions should be marked as well.
Let me know.

> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).
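A minimal, hypothetical illustration of this class of bug (the mount-point names 
are made up): HashMap iteration order is unspecified, so a test must impose its 
own order before asserting.
{code:java}
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashMapOrderSketch {
  public static void main(String[] args) {
    Map<String, String> mounts = new HashMap<>();
    mounts.put("/readonly", "ns0");
    mounts.put("/user", "ns1");

    // Fragile: keySet() order is unspecified and can differ between JDKs.
    // assertEquals("/readonly", mounts.keySet().iterator().next());

    // Deterministic: sort the keys before asserting.
    List<String> keys = new ArrayList<>(mounts.keySet());
    Collections.sort(keys);
    assertEquals("/readonly", keys.get(0));
  }
}
{code}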



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12699:
---
Fix Version/s: 3.0.0
   2.9.0

> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Target Version/s: 3.1.0  (was: 3.0.0)

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.
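For context, a hedged sketch of the copy-and-shed conversion being described, 
with method names approximated from the 3.0 client (not taken from the patch):
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;

public final class FoldSketch {
  private FoldSketch() {}

  // Names approximate; illustrates the copy-and-shed step, not the patch.
  static LocatedFileStatus toLocated(HdfsLocatedFileStatus hdfs)
      throws IOException {
    BlockLocation[] locs =
        DFSUtilClient.locatedBlocks2Locations(hdfs.getBlockLocations());
    // The copy constructor duplicates the common FileStatus fields; the
    // HDFS-specific ones (fileId, storage policy, EC policy, ...) are shed.
    return new LocatedFileStatus(hdfs, locs);
  }
}
{code}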



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227560#comment-16227560
 ] 

Chris Douglas commented on HDFS-7878:
-

Reverted from 3.0.0, will target 3.1.0.

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.1.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful, e.g., as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.
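A hypothetical sketch of the caching use case from the description, using the 
HDFS-specific accessor that exists today; the point of this JIRA is a 
filesystem-neutral way to obtain the same identifier.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.io.IOUtils;

public class InodeKeyedCacheSketch {
  // Keyed by inode id, not path: an overwrite creates a new inode, so
  // stale entries for the replaced file can never be served.
  private final Map<Long, byte[]> cacheByInode = new ConcurrentHashMap<>();

  byte[] readCached(FileSystem fs, Path path) throws IOException {
    // HDFS-only cast today; this JIRA asks for a portable equivalent.
    long fileId = ((HdfsFileStatus) fs.getFileStatus(path)).getFileId();
    byte[] cached = cacheByInode.get(fileId);
    if (cached == null) {
      cached = readAll(fs, path);
      cacheByInode.put(fileId, cached);
    }
    return cached;
  }

  private static byte[] readAll(FileSystem fs, Path path) throws IOException {
    try (FSDataInputStream in = fs.open(path)) {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      IOUtils.copyBytes(in, out, 4096, false);
      return out.toByteArray();
    }
  }
}
{code}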



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

Fix Version/s: (was: 3.0.0)
   3.1.0

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.1.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful, e.g., as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227543#comment-16227543
 ] 

Hadoop QA commented on HDFS-12697:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12697 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895033/HDFS-12697-HDFS-7240.04.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 0313cac73717 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 6739180 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227540#comment-16227540
 ] 

Chris Douglas commented on HDFS-12681:
--

bq. In the interest of keeping the branch releasable, could we revert for now, 
possibly retargeting for 3.1.0?
OK, sure.

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227517#comment-16227517
 ] 

Andrew Wang commented on HDFS-12681:


Thanks for bringing this up Chris. In the interest of keeping the branch 
releasable, could we revert for now, possibly retargeting for 3.1.0?

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12667) KMSClientProvider#ValueQueue does synchronous fetch of edeks in background async thread.

2017-10-31 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12667:
--
Status: Patch Available  (was: Open)

> KMSClientProvider#ValueQueue does synchronous fetch of edeks in background 
> async thread.
> 
>
> Key: HDFS-12667
> URL: https://issues.apache.org/jira/browse/HDFS-12667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms
>Affects Versions: 3.0.0-alpha4
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12667-001.patch, HDFS-12667-002.patch
>
>
> There are a couple of issues in KMSClientProvider#ValueQueue.
> 1.
>  {code:title=ValueQueue.java|borderStyle=solid}
>   private final LoadingCache<String, LinkedBlockingQueue<E>> keyQueues;
>   // Striped rwlocks based on key name to synchronize the queue from
>   // the sync'ed rw-thread and the background async refill thread.
>   private final List<ReadWriteLock> lockArray =
>   new ArrayList<>(LOCK_ARRAY_SIZE);
> {code}
> It hashes the key name into 16 buckets.
> In the code chunk below,
>  {code:title=ValueQueue.java|borderStyle=solid}
> public List<E> getAtMost(String keyName, int num) throws IOException,
>   ExecutionException {
>  ...
>  ...
>  readLock(keyName);
> E val = keyQueue.poll();
> readUnlock(keyName);
>  ...
>   }
>   private void submitRefillTask(final String keyName,
>   final Queue<E> keyQueue) throws InterruptedException {
>   ...
>   ...
>   writeLock(keyName); // It holds the write lock while the key is 
> being asynchronously fetched. So the read requests for all the keys that 
> hash to this bucket will essentially be blocked.
>   try {
> if (keyQueue.size() < threshold && !isCanceled()) {
>   refiller.fillQueueForKey(name, keyQueue,
>   cacheSize - keyQueue.size());
> }
>  ...
>   } finally {
> writeUnlock(keyName);
>   }
> }
>   }
> {code}
> According to the above code chunk, if two keys (let's say key1 and key2) hash to 
> the same bucket (between 1 and 16), and key1 is being asynchronously 
> refetched, then all the getKey calls for key2 will be blocked.
> 2. Due to the striped rw locks, the asynchronous refilling of keys is now 
> synchronous with respect to other handler threads.
> I understand that the locks were added so that we don't kick off multiple 
> asynchronous refill threads for the same key.
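For reference, a small self-contained sketch of the striping scheme described 
above (constants and method names illustrative): any two key names whose hashes 
land on the same index contend on the same lock, which is exactly the cross-key 
blocking in point 1.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class StripedLockSketch {
  private static final int LOCK_ARRAY_SIZE = 16;
  private final List<ReadWriteLock> lockArray =
      new ArrayList<>(LOCK_ARRAY_SIZE);

  public StripedLockSketch() {
    for (int i = 0; i < LOCK_ARRAY_SIZE; i++) {
      lockArray.add(new ReentrantReadWriteLock());
    }
  }

  private int stripe(String keyName) {
    // key1 and key2 can map to the same index: whoever holds the stripe's
    // write lock (e.g. a slow refill) blocks readers of *both* keys.
    return (keyName.hashCode() & Integer.MAX_VALUE) % LOCK_ARRAY_SIZE;
  }

  public void readLock(String keyName) {
    lockArray.get(stripe(keyName)).readLock().lock();
  }

  public void readUnlock(String keyName) {
    lockArray.get(stripe(keyName)).readLock().unlock();
  }
}
{code}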



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12667) KMSClientProvider#ValueQueue does synchronous fetch of edeks in background async thread.

2017-10-31 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12667:
--
Attachment: HDFS-12667-002.patch

bq. IMO we should throw as an IOE to tell the caller "failed to drain".
Throwing an IOE does no good either, since neither of the callers, 
{{KMSClientProvider#drain}} and 
{{EagerKeyGeneratorKeyProviderCryptoExtension.CryptoExtension#drain}}, can 
throw anything back.
{{ValueQueue#drain}} will throw {{ExecutionException}} only if it's not able to 
load values from the {{CacheLoader}}, so IMO it's safe to ignore the exception.
On the other hand, I am thinking of removing the CacheLoader in the first place.

bq. A little confused by this comment, {{drainAgain}} is protected by the 
{{PolicyBasedQueue}} object lock, right? Could you elaborate?
Fixed in #2 patch.

bq. Is this supposed to be called outside of the class?
Removed in #2 patch.

> KMSClientProvider#ValueQueue does synchronous fetch of edeks in background 
> async thread.
> 
>
> Key: HDFS-12667
> URL: https://issues.apache.org/jira/browse/HDFS-12667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms
>Affects Versions: 3.0.0-alpha4
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12667-001.patch, HDFS-12667-002.patch
>
>
> There are a couple of issues in KMSClientProvider#ValueQueue.
> 1.
>  {code:title=ValueQueue.java|borderStyle=solid}
>   private final LoadingCache<String, LinkedBlockingQueue<E>> keyQueues;
>   // Striped rwlocks based on key name to synchronize the queue from
>   // the sync'ed rw-thread and the background async refill thread.
>   private final List<ReadWriteLock> lockArray =
>   new ArrayList<>(LOCK_ARRAY_SIZE);
> {code}
> It hashes the key name into 16 buckets.
> In the code chunk below,
>  {code:title=ValueQueue.java|borderStyle=solid}
> public List<E> getAtMost(String keyName, int num) throws IOException,
>   ExecutionException {
>  ...
>  ...
>  readLock(keyName);
> E val = keyQueue.poll();
> readUnlock(keyName);
>  ...
>   }
>   private void submitRefillTask(final String keyName,
>   final Queue<E> keyQueue) throws InterruptedException {
>   ...
>   ...
>   writeLock(keyName); // It holds the write lock while the key is 
> being asynchronously fetched. So the read requests for all the keys that 
> hash to this bucket will essentially be blocked.
>   try {
> if (keyQueue.size() < threshold && !isCanceled()) {
>   refiller.fillQueueForKey(name, keyQueue,
>   cacheSize - keyQueue.size());
> }
>  ...
>   } finally {
> writeUnlock(keyName);
>   }
> }
>   }
> {code}
> According to the above code chunk, if two keys (let's say key1 and key2) hash to 
> the same bucket (between 1 and 16), and key1 is being asynchronously 
> refetched, then all the getKey calls for key2 will be blocked.
> 2. Due to the striped rw locks, the asynchronous refilling of keys is now 
> synchronous with respect to other handler threads.
> I understand that the locks were added so that we don't kick off multiple 
> asynchronous refill threads for the same key.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227493#comment-16227493
 ] 

Hadoop QA commented on HDFS-12498:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12894808/HDFS-12498.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux acfadca45c97 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f681fa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21899/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21899/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21899/console |

[jira] [Comment Edited] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227471#comment-16227471
 ] 

Chris Douglas edited comment on HDFS-12681 at 10/31/17 8:32 PM:


Rebase on HDFS-7878


was (Author: chris.douglas):
Rebase patch

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.10.patch

Rebase patch

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12682:
-
Attachment: HDFS-12682.06.patch

Patch 6 to address Sammi's comment about toStringWithState, good catch!

Please let me know how you feel about the addPolicies, thanks.

> ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as 
> DISABLED
> 
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch, 
> HDFS-12682.03.patch, HDFS-12682.04.patch, HDFS-12682.05.patch, 
> HDFS-12682.06.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass, because the static instance that the 
> client (e.g. ECAdmin) reads in a unit test is updated by the NN. :)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12667) KMSClientProvider#ValueQueue does synchronous fetch of edeks in background async thread.

2017-10-31 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12667:
--
Status: Open  (was: Patch Available)

> KMSClientProvider#ValueQueue does synchronous fetch of edeks in background 
> async thread.
> 
>
> Key: HDFS-12667
> URL: https://issues.apache.org/jira/browse/HDFS-12667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms
>Affects Versions: 3.0.0-alpha4
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12667-001.patch
>
>
> There are a couple of issues in KMSClientProvider#ValueQueue.
> 1.
>  {code:title=ValueQueue.java|borderStyle=solid}
>   private final LoadingCache<String, LinkedBlockingQueue<E>> keyQueues;
>   // Stripped rwlocks based on key name to synchronize the queue from
>   // the sync'ed rw-thread and the background async refill thread.
>   private final List<ReadWriteLock> lockArray =
>   new ArrayList<>(LOCK_ARRAY_SIZE);
> {code}
> It hashes the key name into 16 buckets.
> In the code chunk below,
>  {code:title=ValueQueue.java|borderStyle=solid}
> public List<E> getAtMost(String keyName, int num) throws IOException,
>   ExecutionException {
>  ...
>  ...
>  readLock(keyName);
> E val = keyQueue.poll();
> readUnlock(keyName);
>  ...
>   }
>   private void submitRefillTask(final String keyName,
>   final Queue<E> keyQueue) throws InterruptedException {
>   ...
>   ...
>   writeLock(keyName); // It holds the write lock while the key is 
> being asynchronously fetched, so read requests for all keys that 
> hash to this bucket will essentially be blocked.
>   try {
> if (keyQueue.size() < threshold && !isCanceled()) {
>   refiller.fillQueueForKey(name, keyQueue,
>   cacheSize - keyQueue.size());
> }
>  ...
>   } finally {
> writeUnlock(keyName);
>   }
> }
>   }
> {code}
> According to the above code chunk, if two keys (let's say key1 and key2) hash 
> to the same bucket (of the 16), and key1 is being asynchronously refetched, 
> then all getKey calls for key2 will be blocked.
> 2. Due to the striped rw locks, the supposedly asynchronous key refill is now 
> synchronous with respect to the other handler threads.
> I understand that the locks were added so that we don't kick off multiple 
> asynchronous refilling threads for the same key.
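A small self-contained sketch of the collision (illustrative names, not the 
actual ValueQueue code): with only 16 stripes, two unrelated keys can map to 
the same ReadWriteLock, so a write-locked refill of one key blocks readers of 
the other.

{code:java|title=StripedLockSketch.java (illustrative only)}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class StripedLockSketch {
  private static final int LOCK_ARRAY_SIZE = 16;
  private final ReentrantReadWriteLock[] locks =
      new ReentrantReadWriteLock[LOCK_ARRAY_SIZE];

  StripedLockSketch() {
    for (int i = 0; i < LOCK_ARRAY_SIZE; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  // "key1" and "key2" may land on the same stripe, coupling their locking.
  ReentrantReadWriteLock lockFor(String keyName) {
    return locks[Math.abs(keyName.hashCode() % LOCK_ARRAY_SIZE)];
  }
}
{code}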



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect

2017-10-31 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227459#comment-16227459
 ] 

Erik Krogen edited comment on HDFS-12219 at 10/31/17 8:25 PM:
--

Hey [~ajisakaa], thank you for looking! I am not a committer, care to put this 
into trunk for me? 

Thank you [~hanishakoneru] for looking as well!


was (Author: xkrogen):
Hey [~ajisakaa], I am not a committer, care to put this into trunk for me?

> Javadoc for FSNamesystem#getMaxObjects is incorrect
> ---
>
> Key: HDFS-12219
> URL: https://issues.apache.org/jira/browse/HDFS-12219
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-12219.000.patch
>
>
> The Javadoc states that this represents the total number of objects in the 
> system, but it really represents the maximum allowed number of objects (as 
> correctly stated on the Javadoc for {{FSNamesystemMBean#getMaxObjects()}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect

2017-10-31 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227459#comment-16227459
 ] 

Erik Krogen commented on HDFS-12219:


Hey [~ajisakaa], I am not a committer, care to put this into trunk for me?

> Javadoc for FSNamesystem#getMaxObjects is incorrect
> ---
>
> Key: HDFS-12219
> URL: https://issues.apache.org/jira/browse/HDFS-12219
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-12219.000.patch
>
>
> The Javadoc states that this represents the total number of objects in the 
> system, but it really represents the maximum allowed number of objects (as 
> correctly stated on the Javadoc for {{FSNamesystemMBean#getMaxObjects()}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect

2017-10-31 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12219:
---
Target Version/s: 3.1.0

> Javadoc for FSNamesystem#getMaxObjects is incorrect
> ---
>
> Key: HDFS-12219
> URL: https://issues.apache.org/jira/browse/HDFS-12219
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-12219.000.patch
>
>
> The Javadoc states that this represents the total number of objects in the 
> system, but it really represents the maximum allowed number of objects (as 
> correctly stated on the Javadoc for {{FSNamesystemMBean#getMaxObjects()}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227458#comment-16227458
 ] 

Chris Douglas commented on HDFS-12681:
--

[~andrew.wang] Since the RC for 3.0.0 is planned for this week, either both 
this and HDFS-7878 should be part of the release or neither should be. I think 
this 
improves the existing logic, but there is a risk that applications may require 
changes if they use {{instanceof}} instead of consistently using the 
{{FileSystem}} methods.

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, 
> HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, 
> HDFS-12681.08.patch, HDFS-12681.09.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12747) Lease monitor may infinitely loop on the same lease

2017-10-31 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227445#comment-16227445
 ] 

Daryn Sharp commented on HDFS-12747:


It involves decommissioning.  HDFS-11499 attempted a partial fix for the 
problem.  About 2.5 years ago I internally patched symptoms of the problem in 
a more complete way, but based on what I understand now, even that is not 
entirely sufficient.

> Lease monitor may infinitely loop on the same lease
> ---
>
> Key: HDFS-12747
> URL: https://issues.apache.org/jira/browse/HDFS-12747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> Lease recovery incorrectly handles UC files if the last block is complete but 
> the penultimate block is committed.  "Incorrectly handles" is a euphemism for 
> "infinitely loops for days and leaves all abandoned streams open until 
> customers complain".
> The problem may manifest when:
> # Block1 is committed but seemingly never completed
> # Block2 is allocated
> # Lease recovery is initiated for block2
> # Commit block synchronization invokes {{FSNamesytem#closeFileCommitBlocks}}, 
> causing:
> #* {{commitOrCompleteLastBlock}} to mark block2 as complete
> #* 
> {{finalizeINodeFileUnderConstruction}}/{{INodeFile.assertAllBlocksComplete}} 
> to throw {{IllegalStateException}} because the penultimate block1 is 
> "COMMITTED but not COMPLETE"
> # The next lease recovery results in an infinite loop.
> The {{LeaseManager}} expects that {{FSNamesystem#internalReleaseLease}} will 
> either init recovery and renew the lease, or remove the lease.  In the 
> described state it does neither.  The switch case will break out if the last 
> block is complete.  (The case statement ironically contains an assert).  
> Since nothing changed, the lease is still the “next” lease to be processed.  
> The lease monitor loops for 25ms on the same lease, sleeps for 2s, loops on 
> it again.
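A hypothetical reduction of the loop (illustrative names, not the actual 
LeaseManager code): if releasing the oldest lease neither renews nor removes 
it, the monitor keeps selecting the same lease.

{code:java|title=LeaseLoopSketch.java (illustrative only)}
import java.util.PriorityQueue;

class LeaseLoopSketch {
  static class Lease { long lastRenewed; }

  private final PriorityQueue<Lease> sortedLeases = new PriorityQueue<>(
      (a, b) -> Long.compare(a.lastRenewed, b.lastRenewed));

  void monitorLoop() throws InterruptedException {
    while (true) {
      long deadline = System.currentTimeMillis() + 25; // ~25ms budget
      while (!sortedLeases.isEmpty()
          && System.currentTimeMillis() < deadline) {
        Lease oldest = sortedLeases.peek();
        // Expected: internalReleaseLease() either inits recovery and
        // renews the lease (re-ordering the queue) or removes it. In the
        // COMMITTED-penultimate-block case it does neither, so peek()
        // keeps returning the very same lease.
        internalReleaseLease(oldest);
      }
      Thread.sleep(2000); // sleep 2s, then spin on the same lease again
    }
  }

  void internalReleaseLease(Lease l) {
    // stands in for the early "break" when the last block is complete
  }
}
{code}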



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12747) Lease monitor may infinitely loop on the same lease

2017-10-31 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HDFS-12747:
--

Assignee: Daryn Sharp

> Lease monitor may infinitely loop on the same lease
> ---
>
> Key: HDFS-12747
> URL: https://issues.apache.org/jira/browse/HDFS-12747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> Lease recovery incorrectly handles UC files if the last block is complete but 
> the penultimate block is committed.  "Incorrectly handles" is a euphemism for 
> "infinitely loops for days and leaves all abandoned streams open until 
> customers complain".
> The problem may manifest when:
> # Block1 is committed but seemingly never completed
> # Block2 is allocated
> # Lease recovery is initiated for block2
> # Commit block synchronization invokes {{FSNamesytem#closeFileCommitBlocks}}, 
> causing:
> #* {{commitOrCompleteLastBlock}} to mark block2 as complete
> #* 
> {{finalizeINodeFileUnderConstruction}}/{{INodeFile.assertAllBlocksComplete}} 
> to throw {{IllegalStateException}} because the penultimate block1 is 
> "COMMITTED but not COMPLETE"
> # The next lease recovery results in an infinite loop.
> The {{LeaseManager}} expects that {{FSNamesystem#internalReleaseLease}} will 
> either init recovery and renew the lease, or remove the lease.  In the 
> described state it does neither.  The switch case will break out if the last 
> block is complete.  (The case statement ironically contains an assert).  
> Since nothing changed, the lease is still the “next” lease to be processed.  
> The lease monitor loops for 25ms on the same lease, sleeps for 2s, loops on 
> it again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-10-31 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227443#comment-16227443
 ] 

Wei Yan commented on HDFS-11384:


Gotcha, thanks for the explanation, [~shv]

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: Konstantin Shvachko
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11384-007.patch, HDFS-11384-branch-2.7.011.patch, 
> HDFS-11384-branch-2.8.011.patch, HDFS-11384.001.patch, HDFS-11384.002.patch, 
> HDFS-11384.003.patch, HDFS-11384.004.patch, HDFS-11384.005.patch, 
> HDFS-11384.006.patch, HDFS-11384.008.patch, HDFS-11384.009.patch, 
> HDFS-11384.010.patch, HDFS-11384.011.patch, balancer.day.png, 
> balancer.week.png
>
>
> Running the balancer on a Hadoop cluster with more than 3000 Datanodes can 
> cause the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation could cause HBase cluster failures due to RegionServer WAL timeouts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227442#comment-16227442
 ] 

Hadoop QA commented on HDFS-12396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 1 new + 242 unchanged 
- 2 fixed = 243 total (was 244) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}242m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895009/HDFS-12396.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47ead4fb3639 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 

[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-10-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12486:
-
Release Note: 
Adds a getconf command option to list the journal nodes.
Usage: hdfs getconf -journalnodes

  was:
GetConf command has an option to list journal nodes.
Usage: hdfs getconf -journalnodes


> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227427#comment-16227427
 ] 

Andrew Wang commented on HDFS-11096:


Sean, given that we've run HDFS rolling upgrades successfully, do you think we 
can pick this up in a later 3.0.x release?

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Sean Mackrory
>Priority: Blocker
> Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, 
> HDFS-11096.003.patch, HDFS-11096.004.patch, HDFS-11096.005.patch, 
> HDFS-11096.006.patch, HDFS-11096.007.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't do 
> a rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Status: Patch Available  (was: Open)

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.
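A sketch of the intended semantics (an assumption about the direction, not the 
attached patch): derive the PROVIDED storage state from a count of live 
datanodes instead of failing on the first dead one.

{code:java|title=ProvidedStateSketch.java (illustrative only)}
import java.util.concurrent.atomic.AtomicInteger;

class ProvidedStateSketch {
  private final AtomicInteger liveProvidedNodes = new AtomicInteger();

  // A datanode with PROVIDED storage registered or reported in.
  String onNodeAlive() {
    liveProvidedNodes.incrementAndGet();
    return "NORMAL"; // any live PROVIDED node restores the state
  }

  // A datanode with PROVIDED storage was marked dead.
  String onNodeDead() {
    // FAILED only when the last PROVIDED datanode is gone.
    return liveProvidedNodes.decrementAndGet() > 0 ? "NORMAL" : "FAILED";
  }
}
{code}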



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12607:
--
Status: Open  (was: Patch Available)

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227422#comment-16227422
 ] 

Andrew Wang commented on HDFS-12699:


Inigo, do you mind setting appropriate fix versions for this JIRA?

> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12485) expunge may fail to remove trash from encryption zone

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227404#comment-16227404
 ] 

Andrew Wang commented on HDFS-12485:


I think the branch-3.0 backport was missed, so I backported it.

> expunge may fail to remove trash from encryption zone
> -
>
> Key: HDFS-12485
> URL: https://issues.apache.org/jira/browse/HDFS-12485
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.9.0, 2.8.3, 3.0.0
>
> Attachments: HDFS-12485.001.patch
>
>
> This is related to HDFS-12484, but turns out that even if I have super user 
> permission, -expunge may not remove trash either.
> If I log into Linux as root, and then login as the superuser h...@example.com
> {noformat}
> [root@nightly511-1 ~]# hdfs dfs -rm /scale/b
> 17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to 
> trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> [root@nightly511-1 ~]# hdfs dfs -expunge
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: 
> /user/hdfs/.Trash/170918143916
> 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: 
> TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
> [root@nightly511-1 ~]# hdfs dfs -ls 
> hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> -rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 
> hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
> {noformat}
> expunge does not remove trash under /scale, because it does not know I am 
> the 'hdfs' user.
> {code:title=DistributedFileSystem#getTrashRoots}
> Path ezTrashRoot = new Path(it.next().getPath(),
> FileSystem.TRASH_PREFIX);
> if (!exists(ezTrashRoot)) {
>   continue;
> }
> if (allUsers) {
>   for (FileStatus candidate : listStatus(ezTrashRoot)) {
> if (exists(candidate.getPath())) {
>   ret.add(candidate);
> }
>   }
> } else {
>   Path userTrash = new Path(ezTrashRoot, System.getProperty(
>   "user.name")); --> bug
>   try {
> ret.add(getFileStatus(userTrash));
>   } catch (FileNotFoundException ignored) {
>   }
> }
> {code}
> It should use the UGI for the user name, rather than the system login user 
> name.
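A minimal sketch of that fix direction (an assumption, not the attached 
patch), using UserGroupInformation in place of the system property:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Inside getTrashRoots(): resolve the user via UGI, not "user.name".
// Note: getCurrentUser() throws IOException, which the caller handles.
String userName = UserGroupInformation.getCurrentUser().getShortUserName();
Path userTrash = new Path(ezTrashRoot, userName);
{code}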



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2017-10-31 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227402#comment-16227402
 ] 

Kihwal Lee commented on HDFS-12749:
---

The RPC calls are timing out because the NN is not able to serve them fast 
enough; an influx of full block reports can slow things down. First of all, 
the calls can time out, but they will be retried, i.e. the datanodes will 
retransmit. Also, make sure your NN is configured correctly, e.g. that the TCP 
listen queue size, number of handlers, etc. are enough to absorb surges in 
requests. If there are too many large block reports, it may not be possible to 
completely avoid the timeout/retransmit cycle, which also increases the 
amount/size of objects sitting in the heap and potentially promoted to the old 
gen prematurely, increasing the chance of a full GC. To lessen the memory 
pressure during block report surges, configure your cluster to break the full 
block report down to the individual storage level, so that each RPC is smaller.
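For concreteness, the settings usually involved look like the following 
(example values only; tune them for your cluster):

{code:xml|title=hdfs-site.xml (example values)}
<!-- Send one block report RPC per storage once a DN holds more blocks
     than this threshold, so each individual report is smaller. -->
<property>
  <name>dfs.blockreport.split.threshold</name>
  <value>1000000</value>
</property>
<!-- More NN RPC handler threads to absorb surges in requests. -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>128</value>
</property>
<!-- Deeper TCP listen queue for the NN's RPC server. -->
<property>
  <name>ipc.server.listen.queue.size</name>
  <value>1024</value>
</property>
{code}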

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: TanYuxin
>
> Our cluster now has thousands of DNs and millions of files and blocks. When 
> the NN restarts, its load is very high.
> After the NN restart, the DN will call the BPServiceActor#reRegister method 
> to register. But the register RPC will get an IOException since the NN is 
> busy dealing with block reports.  The exception is caught at 
> BPServiceActor#processCommand.
> Here is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the block 
> report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode 

[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock#testPendingDeleteUnknownBlocks fails intermittently

2017-10-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12495:
---
Fix Version/s: (was: 3.0.0)

> TestPendingInvalidateBlock#testPendingDeleteUnknownBlocks fails intermittently
> --
>
> Key: HDFS-12495
> URL: https://issues.apache.org/jira/browse/HDFS-12495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Badger
>Assignee: Eric Badger
>  Labels: flaky-test
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0
>
> Attachments: HDFS-12495.001.patch, HDFS-12495.002.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:36701] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:546)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:955)
>   at org.apache.hadoop.ipc.Server.(Server.java:2655)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-10-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12523:
---
Fix Version/s: (was: 3.0.0)

> Thread pools in ErasureCodingWorker do not shutdown
> ---
>
> Key: HDFS-12523
> URL: https://issues.apache.org/jira/browse/HDFS-12523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Huafeng Wang
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12523.001.patch, HDFS-12523.002.patch
>
>
> There is no code path in {{ErasureCodingWorker}} to shut down its two thread 
> pools: {{stripedReconstructionPool}} and {{stripedReadPool}}.
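A minimal sketch of the missing shutdown path (an assumption, not the attached 
patch), using the standard ExecutorService idiom for each pool:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

static void shutdownPool(ExecutorService pool) {
  pool.shutdown(); // stop accepting new reconstruction/read tasks
  try {
    if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
      pool.shutdownNow(); // cancel tasks that are still running
    }
  } catch (InterruptedException e) {
    pool.shutdownNow();
    Thread.currentThread().interrupt();
  }
}
// e.g. shutdownPool(stripedReconstructionPool); shutdownPool(stripedReadPool);
{code}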



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12750) Ozone: Stabilize Ozone unit tests

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12750:
--
Description: 
Some of the newly added ozone tests need to shut down the MiniOzoneCluster so 
that the metadata db and test files are cleaned up for subsequent tests.

TestStorageContainerManager#testBlockDeletionTransactions


  was:Test needs to shutdown the MiniOzoneCluster so that the metadata db and 
test files are cleaned up for subsequent tests.


> Ozone: Stabilize Ozone unit tests
> -
>
> Key: HDFS-12750
> URL: https://issues.apache.org/jira/browse/HDFS-12750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Some of the newly added ozone tests need to shut down the MiniOzoneCluster so 
> that the metadata db and test files are cleaned up for subsequent tests.
> TestStorageContainerManager#testBlockDeletionTransactions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227399#comment-16227399
 ] 

Andrew Wang commented on HDFS-12544:


I think the branch-3.0 backport was missed, so I backported it.

> SnapshotDiff - support diff generation on any snapshot root descendant 
> directory
> 
>
> Key: HDFS-12544
> URL: https://issues.apache.org/jira/browse/HDFS-12544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0
>
> Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, 
> HDFS-12544.03.patch, HDFS-12544.04.patch, HDFS-12544.05.patch
>
>
> {noformat}
> # hdfs snapshotDiff <snapshotDir> <fromSnapshot> <toSnapshot>
> 
> {noformat}
> Using snapshot diff command, we can generate a diff report between any two 
> given snapshots under a snapshot root directory. The command today only 
> accepts the path that is a snapshot root. There are many deployments where 
> the snapshot root is configured at the higher level directory but the diff 
> report needed is only for a specific directory under the snapshot root. In 
> these cases, the diff report can be filtered for changes pertaining to the 
> directory we are interested in. But when the snapshot root directory is very 
> huge, the snapshot diff report generation can take minutes even if we are 
> interested to know the changes only in a small directory. So, it would be 
> highly performant if the diff report calculation can be limited to only the 
> interesting sub-directory of the snapshot root instead of the whole snapshot 
> root.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12750) Ozone: Stabilize Ozone unit tests

2017-10-31 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12750:
--
Summary: Ozone: Stabilize Ozone unit tests  (was: Ozone: Fix 
TestStorageContainerManager#testBlockDeletionTransactions)

> Ozone: Stabilize Ozone unit tests
> -
>
> Key: HDFS-12750
> URL: https://issues.apache.org/jira/browse/HDFS-12750
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Test needs to shut down the MiniOzoneCluster so that the metadata db and test 
> files are cleaned up for subsequent tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12573) Divide the total block metrics into replica and ec

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227397#comment-16227397
 ] 

Andrew Wang commented on HDFS-12573:


I think the branch-3.0 backport was missed, so I backported it.

> Divide the total block metrics into replica and ec
> --
>
> Key: HDFS-12573
> URL: https://issues.apache.org/jira/browse/HDFS-12573
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, metrics, namenode
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Fix For: 3.0.0
>
> Attachments: HDFS-12573.1.patch, HDFS-12573.2.patch, 
> HDFS-12573.3.patch
>
>
> Following HDFS-10999, let's separate the total block metrics. It would be useful 
> for administrators.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227395#comment-16227395
 ] 

Andrew Wang commented on HDFS-12614:


I think the branch-3.0 backport was missed, so I backported it.

> FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider 
> configured
> --
>
> Key: HDFS-12614
> URL: https://issues.apache.org/jira/browse/HDFS-12614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0
>
> Attachments: HDFS-12614.01.patch, HDFS-12614.02.patch, 
> HDFS-12614.03.patch, HDFS-12614.04.patch, HDFS-12614.test.01.patch
>
>
> When INodeAttributesProvider is configured, and when resolving path (like 
> "/") and checking for permission, the following code when working on 
> {{pathByNameArr}} throws NullPointerException. 
> {noformat}
>   private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
>   INode inode, int snapshotId) {
> INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
> if (getAttributesProvider() != null) {
>   String[] elements = new String[pathIdx + 1];
>   for (int i = 0; i < elements.length; i++) {
> elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);  <===
>   }
>   inodeAttrs = getAttributesProvider().getAttributes(elements, 
> inodeAttrs);
> }
> return inodeAttrs;
>   }
> {noformat}
> Looks like for paths like "/", where the components split on the delimiter 
> "/" can be null, the pathByNameArr array can have null elements and can 
> throw an NPE.
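A hypothetical guard for the loop above (not necessarily the committed fix): 
treat a null component, such as the empty root component of "/", as an empty 
string before converting it.

{code:java}
for (int i = 0; i < elements.length; i++) {
  // "/" resolves to a null first component; guard before converting.
  elements[i] = (pathByNameArr[i] == null)
      ? "" : DFSUtil.bytes2String(pathByNameArr[i]);
}
{code}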



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12619) Do not catch and throw unchecked exceptions if IBRs fail to process

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227394#comment-16227394
 ] 

Andrew Wang commented on HDFS-12619:


I think the branch-3.0 commit was missed, so I backported it.

> Do not catch and throw unchecked exceptions if IBRs fail to process
> ---
>
> Key: HDFS-12619
> URL: https://issues.apache.org/jira/browse/HDFS-12619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.9.0, 2.8.3, 3.0.0
>
> Attachments: HDFS-12619.001.patch
>
>
> HDFS-9198 added the following code
> {code:title=BlockManager#processIncrementalBlockReport}
> public void processIncrementalBlockReport(final DatanodeID nodeID,
>   final StorageReceivedDeletedBlocks srdb) throws IOException {
> ...
> try {
>   processIncrementalBlockReport(node, srdb);
> } catch (Exception ex) {
>   node.setForceRegistration(true);
>   throw ex;
> }
>   }
> {code}
> In Apache Hadoop 2.7.x ~ 3.0, the code snippet is accepted by the Java 
> compiler. However, when I attempted to backport it to a CDH5.3 release (based 
> on Apache Hadoop 2.5.0), the compiler complains that the exception is 
> unhandled, because the method declares that it throws IOException instead of 
> Exception.
> While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not a 
> good practice to catch an unchecked exception and then rethrow it. How about 
> rewriting it with a finally block and a conditional variable?
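One way to realize that suggestion (a sketch, not the committed patch): a 
success flag checked in a finally block avoids catching and rethrowing an 
unchecked exception altogether.

{code:java}
boolean successful = false;
try {
  processIncrementalBlockReport(node, srdb);
  successful = true;
} finally {
  if (!successful) {
    // Same effect as the catch-and-rethrow, without widening the caught
    // exception type beyond what the method declares.
    node.setForceRegistration(true);
  }
}
{code}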



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12622) Fix enumerate in HDFSErasureCoding.md

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227392#comment-16227392
 ] 

Andrew Wang commented on HDFS-12622:


I think the branch-3.0 commit was missed, so I backported it.

> Fix enumerate in HDFSErasureCoding.md
> -
>
> Key: HDFS-12622
> URL: https://issues.apache.org/jira/browse/HDFS-12622
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-12622.001.patch, HDFS-12622.001.patch, Screen Shot 
> 2017-10-10 at 17.36.16.png, screenshot.png
>
>
> {noformat}
>   HDFS native implementation of default RS codec leverages Intel ISA-L 
> library to improve the encoding and decoding calculation. To enable and use 
> Intel ISA-L, there are three steps.
>   1. Build ISA-L library. Please refer to the official site 
> "https://github.com/01org/isa-l/; for detail information.
>   2. Build Hadoop with ISA-L support. Please refer to "Intel ISA-L build 
> options" section in "Build instructions for Hadoop" in (BUILDING.txt) in the 
> source code.
>   3. Use `-Dbundle.isal` to copy the contents of the `isal.lib` directory 
> into the final tar file. Deploy Hadoop with the tar file. Make sure ISA-L is 
> available on HDFS clients and DataNodes.
> {noformat}
> Missing empty line before the enumeration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-10-31 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12549:
---
Attachment: HDFS-12549-HDFS-7240.003.patch

Patch upload after rebase.

> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12750) Ozone: Fix TestStorageContainerManager#testBlockDeletionTransactions

2017-10-31 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12750:
-

 Summary: Ozone: Fix 
TestStorageContainerManager#testBlockDeletionTransactions
 Key: HDFS-12750
 URL: https://issues.apache.org/jira/browse/HDFS-12750
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Test needs to shut down the MiniOzoneCluster so that the metadata db and test 
files are cleaned up for subsequent tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-10-31 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12705:
--
Status: Patch Available  (was: Open)

> WebHdfsFileSystem exceptions should retain the caused by exception
> --
>
> Key: HDFS-12705
> URL: https://issues.apache.org/jira/browse/HDFS-12705
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Hanisha Koneru
> Attachments: HDFS-12705.001.patch, HDFS-12705.002.patch
>
>
> {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host 
> to the exception.  While it preserves the original stacktrace, it omits the 
> original cause, which complicates debugging.
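A minimal sketch of the idea (an assumption, not the attached patch): when 
rebuilding the exception with the remote-host annotation, pass the original 
exception along as the cause. The names host and ioe are assumed here.

{code:java}
// The second constructor argument preserves the "caused by" chain.
IOException annotated = new IOException(
    "Remote host: " + host + ", " + ioe.getMessage(), ioe);
{code}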



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.011.patch

Patch v011 fixes a missing Javadoc. Will commit this once Jenkins comes back.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227303#comment-16227303
 ] 

Virajith Jalaparti edited comment on HDFS-11902 at 10/31/17 6:42 PM:
-

Patch v011 fixes a missing Javadoc. Will commit this once Jenkins comes back.

Thanks for reviewing this [~ehiggs].


was (Author: virajith):
Patch v011 fixes a missing java doc. Will commit this once jenkins comes back.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch, HDFS-11902-HDFS-9806.011.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, 
> HDFS-11902-HDFS-9806.010.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha

2017-10-31 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16227269#comment-16227269
 ] 

Bharat Viswanadham commented on HDFS-12697:
---

[~xyao]
Thank You for review.
Attached patch v04 to fix the checkstyle issues and shellcheck warnings. 

> Ozone services must stay disabled in secure setup for alpha
> ---
>
> Key: HDFS-12697
> URL: https://issues.apache.org/jira/browse/HDFS-12697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-12697-HDFS-7240.01.patch, 
> HDFS-12697-HDFS-7240.02.patch, HDFS-12697-HDFS-7240.03.patch, 
> HDFS-12697-HDFS-7240.04.patch
>
>
> When security is enabled, ozone services should not start up, even if ozone 
> configurations are enabled. This is important to ensure a user experimenting 
> with ozone doesn't inadvertently get exposed to attacks. Specifically,
> 1) KSM should not start up.
> 2) SCM should not start up.
> 3) Datanode's ozone xceiverserver should not start up, and must not listen on 
> a port.
> 4) Datanode's ozone handler port should not be open, and webservice must stay 
> disabled.
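A sketch of the kind of guard this implies (illustrative; the ozoneEnabled 
flag is an assumed name, while the UGI call is the real API):

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

// During service startup: refuse to bring up the alpha ozone services
// when Kerberos security is enabled, even if ozone is configured on.
if (ozoneEnabled && UserGroupInformation.isSecurityEnabled()) {
  throw new RuntimeException(
      "Ozone is alpha and must stay disabled in a secure setup");
}
{code}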



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha

2017-10-31 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12697:
--
Attachment: HDFS-12697-HDFS-7240.04.patch

> Ozone services must stay disabled in secure setup for alpha
> ---
>
> Key: HDFS-12697
> URL: https://issues.apache.org/jira/browse/HDFS-12697
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDFS-12697-HDFS-7240.01.patch, 
> HDFS-12697-HDFS-7240.02.patch, HDFS-12697-HDFS-7240.03.patch, 
> HDFS-12697-HDFS-7240.04.patch
>
>
> When security is enabled, ozone services should not start up, even if ozone 
> configurations are enabled. This is important to ensure a user experimenting 
> with ozone doesn't inadvertently get exposed to attacks. Specifically,
> 1) KSM should not start up.
> 2) SCM should not start up.
> 3) Datanode's ozone xceiverserver should not start up, and must not listen on 
> a port.
> 4) Datanode's ozone handler port should not be open, and the webservice must 
> stay disabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12682) ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as DISABLED

2017-10-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227240#comment-16227240
 ] 

Xiao Chen commented on HDFS-12682:
--

Thanks for the review, Sammi.

I also thought about whether we need to provide the state for add / get too. :) 
I'm not sure whether the EC admin would need to see the state from those APIs 
either; my 2 cents is that it should be fine as long as they can get it from 
-listPolicies, and add / get don't return a wrong state (as they did before the 
patch).

If [~drankye], [~eddyxu] or anyone else has a strong opinion, I'm happy to 
coordinate in this patch.

> ECAdmin -listPolicies will always show SystemErasureCodingPolicies state as 
> DISABLED
> 
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch, 
> HDFS-12682.03.patch, HDFS-12682.04.patch, HDFS-12682.05.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass, because the static instance that the 
> client (e.g. ECAdmin) reads in unit tests is the same one updated by the NN. :)
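The root cause is easy to reproduce outside HDFS; here is a self-contained sketch of the same caching pattern (hypothetical Policy/PolicyCache classes, not the HDFS code):

{noformat}
import java.util.HashMap;
import java.util.Map;

// Hypothetical reproduction of the bug pattern: a static cache hands out
// shared instances whose mutable 'enabled' state was fixed at construction,
// so a deserializer that prefers the cache drops the state on the wire.
class Policy {
  final int id;
  boolean enabled;            // mutable state, defaults to false (DISABLED)
  Policy(int id) { this.id = id; }
}

class PolicyCache {
  private static final Map<Integer, Policy> BY_ID = new HashMap<>();
  static { BY_ID.put(1, new Policy(1)); }

  // Mirrors the protobuf deserializer: a cache hit wins, wire state is lost.
  static Policy convert(int id, boolean enabledOnWire) {
    Policy cached = BY_ID.get(id);
    return cached != null ? cached : makeFresh(id, enabledOnWire);
  }

  private static Policy makeFresh(int id, boolean enabledOnWire) {
    Policy p = new Policy(id);
    p.enabled = enabledOnWire;
    return p;
  }
}
{noformat}

Calling {{PolicyCache.convert(1, true)}} returns the cached instance with {{enabled == false}}, which is exactly the State=DISABLED symptom shown above.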



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227236#comment-16227236
 ] 

Hudson commented on HDFS-12499:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13170/])
HDFS-12499. dfs.namenode.shared.edits.dir property is currently namenode (arp: 
rev b922ba7393bd97b98e90f50f01b4cc664c44adb9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
Revert "HDFS-12499. dfs.namenode.shared.edits.dir property is currently (wang: 
rev 5f681fa8216fb43dff8a3d21bf21e91d6c6f6d9c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java


> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.
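Suffixed keys like these resolve by plain string concatenation against the base key. A minimal sketch of the implied lookup order, using only org.apache.hadoop.conf.Configuration; the helper class and its fallback order are assumptions for illustration, not DFSUtil's actual code:

{noformat}
import org.apache.hadoop.conf.Configuration;

public final class SharedEditsKey {
  private static final String BASE = "dfs.namenode.shared.edits.dir";

  private SharedEditsKey() {}

  // Hypothetical resolution order: the most specific key wins. Option 2 is
  // the per-namenode form this jira proposes to drop.
  public static String resolve(Configuration conf, String nsId, String nnId) {
    String perNamenode = BASE + "." + nsId + "." + nnId;   // option 2
    String perNameservice = BASE + "." + nsId;             // option 1
    String v = conf.get(perNamenode);
    if (v == null) v = conf.get(perNameservice);
    if (v == null) v = conf.get(BASE);
    return v;
  }
}
{noformat}

With such a lookup order, two namenodes of one nameservice can silently see different journal quorums, which is the inconsistency described above.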



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227215#comment-16227215
 ] 

Arpit Agarwal commented on HDFS-12499:
--

Also, thanks for the quick response!

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227206#comment-16227206
 ] 

Arpit Agarwal commented on HDFS-12499:
--

Sounds fine, I am okay with holding off forking branch-3 for now. However I 
don't recall there being any strong consensus in the community.

We cannot hold off Hadoop 4 indefinitely. It puts an unfair burden on the 
author of the first incompatible change.

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227202#comment-16227202
 ] 

Andrew Wang commented on HDFS-11467:


Hey folks, are we planning to close on this in the next few days? Looks like 
HDFS-12682 is pretty close, so not a bad idea to rebase on top of the latest 
patch to get precommit runs and reviews started here.

> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch
>
>
> As discussed in HDFS-7859, after the ErasureCoding section is added to the 
> fsimage, we would like to also support converting this section to XML and 
> back using the OIV tool.
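For context, the OIV round trip the new section has to survive looks like this (example image file names):

{noformat}
# fsimage -> XML
hdfs oiv -p XML -i fsimage_0000000000000000024 -o fsimage.xml
# XML -> fsimage, via the ReverseXML processor
hdfs oiv -p ReverseXML -i fsimage.xml -o fsimage_rebuilt
{noformat}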



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227197#comment-16227197
 ] 

Andrew Wang commented on HDFS-12499:


I've reverted this from trunk and branch-3.0. Apologies for the churn.

One more thing, when we discussed the branching strategy for GA, the community 
consensus was to avoid Hadoop 4 until it was really needed. So, before adding 
another branch, we should start a mailing list discussion with the rationale.

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12499:
---
Fix Version/s: (was: 3.0.0)

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-12499:


> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227185#comment-16227185
 ] 

Andrew Wang commented on HDFS-12499:


Sorry, I was off yesterday. I just checked, and Cloudera Manager actually does 
set it in style 2.

Two questions reviewing the JIRA:
* Is there some way to make this backwards compatible, like having one conf 
override the other?
* Is the rationale for this change to simplify the configuration options?

Since this doesn't look like a critical bug fix and it does have real world 
compatibility implications, I'm going to revert this while we discuss. I was 
planning on posting an RC for 3.0.0 GA this week.


> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.0.0
>
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227182#comment-16227182
 ] 

Hudson commented on HDFS-12699:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13169/])
HDFS-12699. TestMountTable fails with Java 7. Contributed by Inigo (inigoiri: 
rev 982bd2a5bff4c490a5d62f2e4fd0d3755305349c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMountTable.java


> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).
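The general fix for this class of failure is to avoid asserting on {{HashMap}} iteration order at all; a minimal JUnit 4 sketch (hypothetical test, not TestMountTable itself):

{noformat}
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.Test;

// Hypothetical test: asserting on HashMap iteration order passes or fails
// depending on the JDK's hashing, so sort (or use a TreeMap) before comparing.
public class TestOrderIndependent {
  @Test
  public void testDestinationsAnyOrder() {
    Map<String, String> dest = new HashMap<>();
    dest.put("ns1", "/path1");
    dest.put("ns0", "/path0");
    List<String> keys = new ArrayList<>(dest.keySet());
    Collections.sort(keys);
    assertEquals(Arrays.asList("ns0", "ns1"), keys);
  }
}
{noformat}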



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Patch Available  (was: Open)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.
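For illustration, a minimal Builder sketch over a subset of the fields (the real BlockLocation carries more state, and the committed API may differ):

{noformat}
// Hypothetical builder sketch for a telescoping-constructor class.
public class BlockLocation {
  private final String[] hosts;
  private final long offset;
  private final long length;
  private final boolean corrupt;

  private BlockLocation(Builder b) {
    this.hosts = b.hosts;
    this.offset = b.offset;
    this.length = b.length;
    this.corrupt = b.corrupt;
  }

  public static class Builder {
    private String[] hosts = new String[0];
    private long offset;
    private long length;
    private boolean corrupt;

    public Builder setHosts(String[] hosts) { this.hosts = hosts; return this; }
    public Builder setOffset(long offset)   { this.offset = offset; return this; }
    public Builder setLength(long length)   { this.length = length; return this; }
    public Builder setCorrupt(boolean c)    { this.corrupt = c; return this; }
    public BlockLocation build()            { return new BlockLocation(this); }
  }
}
{noformat}

A call site then reads {{new BlockLocation.Builder().setOffset(0).setLength(1024).build()}} instead of choosing among eight positional constructors.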



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2017-10-31 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9240:
-
Status: Open  (was: Patch Available)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9240.001.patch
>
>
> This JIRA is opened to refactor the 8 telescoping constructors of the 
> BlockLocation class using the Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12699:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect

2017-10-31 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227147#comment-16227147
 ] 

Akira Ajisaka commented on HDFS-12219:
--

LGTM, +1

> Javadoc for FSNamesystem#getMaxObjects is incorrect
> ---
>
> Key: HDFS-12219
> URL: https://issues.apache.org/jira/browse/HDFS-12219
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-12219.000.patch
>
>
> The Javadoc states that this represents the total number of objects in the 
> system, but it really represents the maximum allowed number of objects (as 
> correctly stated on the Javadoc for {{FSNamesystemMBean#getMaxObjects()}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-10-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12499:
-
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

I've committed this for now to trunk and branch-3.0.

[~andrew.wang], if you think this incompatibility is not appropriate for 3.0 
then I am fine with moving it to Hadoop 4 (we'd need to fork off branch-3).

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.0.0
>
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be 
> supported, because for a particular nameservice the quorum of journal nodes 
> should be the same.
> If option 2 is supported, then for a nameservice Id that has two namenodes, 
> users can configure different journal node values for each namenode, which is 
> incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of 
> configuring, or remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227160#comment-16227160
 ] 

Sergey Shelukhin commented on HDFS-7878:


Thank you for all the work getting this in! I thought it wouldn't actually ever 
happen :)

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.
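As committed, the identifier surfaces through the FileSystem PathHandle API; a minimal usage sketch (treat the exact signatures as illustrative of the committed patch rather than authoritative):

{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.HandleOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

public final class PathHandleExample {
  private PathHandleExample() {}

  // Pin the exact file (same path, same content) and reopen it by handle.
  // If the file was overwritten in between, open(handle) fails rather than
  // silently returning the new content.
  static void reopenExact(FileSystem fs, Path p) throws IOException {
    FileStatus st = fs.getFileStatus(p);
    PathHandle handle = fs.getPathHandle(st, HandleOpt.exact());
    try (FSDataInputStream in = fs.open(handle)) {
      // read from 'in'; the bytes belong to the inode referenced by 'st'
    }
  }
}
{noformat}

This is exactly the overwrite-safe cache-key use case described above.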



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12699) TestMountTable fails with Java 7

2017-10-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227152#comment-16227152
 ] 

Íñigo Goiri commented on HDFS-12699:


Thanks [~ajisakaa]. Committed into branch-2, branch-3.0 and trunk.

> TestMountTable fails with Java 7
> 
>
> Key: HDFS-12699
> URL: https://issues.apache.org/jira/browse/HDFS-12699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch
>
>
> Some of the issues for HDFS-12620 were related to Java 7.
> In particular, we relied on the {{HashMap}} order (which is wrong).
> This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227128#comment-16227128
 ] 

Hudson commented on HDFS-7878:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13168 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13168/])
HDFS-7878. API - expose a unique file identifier. (cdouglas: rev 
d015e0bbd5416943cb4875274e67b7077c00e54b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawPathHandle.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsPathHandle.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathHandle.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusSerialization.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227113#comment-16227113
 ] 

Chris Douglas edited comment on HDFS-7878 at 10/31/17 5:02 PM:
---

I committed this.

Thanks everyone for the reviews and feedback, particularly 
[~ste...@apache.org], [~mackrorysd], [~elgoiri], [~anu], [~andrew.wang], 
[~eddyxu], and [~jingzhao].


was (Author: chris.douglas):
I committed this.

Thanks everyone for the reviews and feedback, particularly 
[~ste...@apache.org], [~mackrorysd], [~anu], [~andrew.wang], [~eddyxu], and 
[~jingzhao].

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I committed this.

Thanks everyone for the reviews and feedback, particularly 
[~ste...@apache.org], [~mackrorysd], [~anu], [~andrew.wang], [~eddyxu], and 
[~jingzhao].

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7878) API - expose a unique file identifier

2017-10-31 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reassigned HDFS-7878:
---

Assignee: Chris Douglas  (was: Sergey Shelukhin)

> API - expose a unique file identifier
> -
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Chris Douglas
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.17.patch, 
> HDFS-7878.18.patch, HDFS-7878.19.patch, HDFS-7878.20.patch, 
> HDFS-7878.21.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


