[jira] [Commented] (HDFS-9666) Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to improve random read

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427968#comment-16427968
 ] 

Ajay Kumar commented on HDFS-9666:
--

[~yangjiandan] thanks for updating the patch. Overall it looks good. A few 
comments:
 * DFSInputStream
 ** Personally I think we should merge getBestNodeDNAddrPair and 
getBestNodeDNAddrPairRemoteSsdFirst, since most of their functionality is the same.
 ** Improve the chooseDataNode javadoc for remoteSsdFirst (L860) to something like 
"if true, read a remote SSD/RAM replica first when the local disks are HDD".
 ** Improve the javadoc for getBestNodeDNAddrPairRemoteSsdFirst to mention the 
selection strategy (see the sketch after this list):
"Read from the local node if a) the block is on SSD, or b) no other replica 
exists on SSD or RAM.
 Read from a remote node if the local replica is on HDD and the remote replica 
is on SSD/RAM."
 * TestDFSInputStream#testReadSsdFirstWithSsd
 ** Typo in L222 & L259. We should include the expected storageId in the assert 
text, i.e. "Should be storageID3." and "Should be storageID1." respectively.
 ** We should also assert the expected storage type in both tests.
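
As a reference for that javadoc, here is a minimal, self-contained sketch of the 
selection strategy described above (illustrative only; the types and method names 
are hypothetical and not the actual HDFS-9666 patch):
{code:java}
import java.util.List;

/** Sketch of the remote-SSD-first replica selection (hypothetical types). */
public class RemoteSsdFirstSketch {
  enum StorageType { RAM_DISK, SSD, DISK }

  record Replica(String host, StorageType type) { }

  /** Prefer a local SSD/RAM replica, then any remote SSD/RAM replica, then local. */
  static Replica choose(List<Replica> replicas, String localHost) {
    Replica local = null;
    Replica remoteFast = null;
    for (Replica r : replicas) {
      if (r.host().equals(localHost)) {
        local = r;
      } else if (remoteFast == null
          && (r.type() == StorageType.SSD || r.type() == StorageType.RAM_DISK)) {
        remoteFast = r;
      }
    }
    if (local != null && local.type() != StorageType.DISK) {
      return local;       // a) the local replica is already on SSD/RAM
    }
    if (remoteFast != null) {
      return remoteFast;  // b) local replica is on HDD and a remote SSD/RAM replica exists
    }
    return local != null ? local : replicas.get(0);  // otherwise fall back
  }

  public static void main(String[] args) {
    List<Replica> replicas = List.of(
        new Replica("local-host", StorageType.DISK),
        new Replica("remote-host", StorageType.SSD));
    System.out.println(choose(replicas, "local-host"));  // -> the remote SSD replica
  }
}
{code}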

> Enable hdfs-client to read even remote SSD/RAM prior to local disk replica to 
> improve random read
> -
>
> Key: HDFS-9666
> URL: https://issues.apache.org/jira/browse/HDFS-9666
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-9666.0.patch, HDFS-9666.001.patch, 
> HDFS-9666.002.patch, HDFS-9666.003.patch, HDFS-9666.004.patch
>
>
> We want to improve the random read performance of HDFS for HBase, so we 
> enabled heterogeneous storage in our cluster. However, only ~50% of the 
> datanode & regionserver hosts have SSD, so we can only set the hfile storage 
> policy to ONE_SSD rather than ALL_SSD, and a regionserver on a non-SSD host 
> can only read the local disk replica. We therefore developed this feature in 
> the hdfs client to read even a remote SSD/RAM replica prior to the local disk 
> replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427961#comment-16427961
 ] 

genericqa commented on HDFS-13342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
35s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} 

[jira] [Commented] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-04-05 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427910#comment-16427910
 ] 

Surendra Singh Lilhore commented on HDFS-13292:
---

Thanks [~RANith] for the patch and [~shahrs87] for the review.

Tests are failing locally; Ranith, can you check once...
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.cli.TestCryptoAdminCLI
[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 6.626 s 
<<< FAILURE! - in org.apache.hadoop.cli.TestCryptoAdminCLI
[ERROR] testAll(org.apache.hadoop.cli.TestCryptoAdminCLI)  Time elapsed: 6.499 
s  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.lang.NoSuchMethodError): 
org.apache.hadoop.hdfs.protocol.DatanodeInfo.setNumBlocks(I)V
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.datanodeReport(FSNamesystem.java:4341)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDatanodeReport(NameNodeRpcServer.java:1208)
{noformat}

> Crypto command should give proper exception when key is already exist for 
> zone directory
> 
>
> Key: HDFS-13292
> URL: https://issues.apache.org/jira/browse/HDFS-13292
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, kms
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13292.001.patch, HDFS-13292.002.patch
>
>
> {{Scenario:}}
>  # Create a dir.
>  # Create an EZ for the above dir with key1.
>  # Try to create a zone for the same directory again with a different key, 
> i.e. key2.
> {noformat}
> hadoopclient> hadoop key list
> Listing keys for KeyProvider: 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
> key2
> key1
> hadoopclient> hdfs dfs -mkdir /kms
> hadoopclient> hdfs dfs -put bigdata_env /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient> hdfs dfs -rmr /kms/file1
> rmr: DEPRECATED: Please use '-rm -r' instead.
> Deleted /kms/file1
> hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
> Added encryption zone /kms
> hadoopclient> hdfs crypto -createZone -keyName key2 -path /kms
> RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.
> hadoopclient>
>  {noformat}
> Actual Output:
> ===
> {{RemoteException: Attempt to create an encryption zone for a non-empty 
> directory.}}
> Expected Output:
> =
> {{An exception stating that the directory already has an encryption zone, so 
> a new zone cannot be created on it.}}
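
One way to get the clearer message is to report the existing-zone condition 
before the non-empty check. A simplified, hypothetical sketch of that ordering 
(not the actual NameNode code path or the attached patches):
{code:java}
import java.io.IOException;

/** Hypothetical illustration of reporting "already an encryption zone" first. */
public class CreateZoneCheckSketch {
  static void checkCanCreateZone(boolean alreadyEncryptionZone, boolean dirEmpty,
      String path) throws IOException {
    if (alreadyEncryptionZone) {
      // Report the more specific condition first.
      throw new IOException("Directory " + path + " is already an encryption zone.");
    }
    if (!dirEmpty) {
      throw new IOException(
          "Attempt to create an encryption zone for a non-empty directory.");
    }
  }

  public static void main(String[] args) {
    try {
      checkCanCreateZone(true, false, "/kms");
    } catch (IOException e) {
      System.out.println(e.getMessage());  // -> already an encryption zone
    }
  }
}
{code}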






[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal

2018-04-05 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427895#comment-16427895
 ] 

Tao Jie commented on HDFS-13279:


[~ajayydv] Thank you for your comments.
{quote}
Don't remove the default impl entry from core-default.xml.
Similarly we can do this with almost no change in NetworkTopology, 
DFSNetworkTopology and TestBalancerWithNodeGroup. We can update 
DatanodeManager#init to instantiate according to the value of "net.topology.impl".
{quote}
Actually, in the original patch based on 2.8.2 we don't need to modify 
core-default.xml, NetworkTopology, or TestBalancerWithNodeGroup. The tricky part 
is HDFS-11998, which set DFSNetworkTopology as the default topology 
implementation even though {{net.topology.impl}} is set to NetworkTopology. In 
HDFS-11530, once {{dfs.use.dfs.network.topology}} is true, the implementation is 
hard-coded to {{DFSNetworkTopology}} no matter what {{net.topology.impl}} is. So 
we have to modify that behavior if we want to add a new topology implementation 
and have it take effect. Maybe we could fix it in another Jira?
{quote}
L43, chooseDataNode: Instead of choosing a datanode twice we can just call 
super.chooseRandom if we override chooseRandom in 
NetworkTopologyWithWeightedRack. This way we can avoid calling chooseRandom 
twice.
{quote}
It is OK if we use the {{first choose a rack, then choose a node}} logic in 
{{chooseRandom}}. The purpose of choosing twice is mostly to reuse the current 
choosing logic, which keeps the code simpler :)
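
For readers unfamiliar with the idea, here is a minimal sketch of the two-step 
"first choose a rack, then choose a node" selection discussed above (standalone 
illustrative code; not the HDFS-13279 patch and not the real NetworkTopology API):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Two-step random choice: pick a rack first, then a node inside that rack. */
public class RackFirstChooser {
  private final Random random = new Random();

  public String chooseRandom(Map<String, List<String>> nodesByRack) {
    // Step 1: choose a rack (uniformly here; a weighted pick is also possible).
    List<String> racks = new ArrayList<>(nodesByRack.keySet());
    String rack = racks.get(random.nextInt(racks.size()));
    // Step 2: choose a node within the chosen rack.
    List<String> nodes = nodesByRack.get(rack);
    return nodes.get(random.nextInt(nodes.size()));
  }

  public static void main(String[] args) {
    Map<String, List<String>> topology = Map.of(
        "/rack1", List.of("dn1", "dn2", "dn3"),
        "/rack2", List.of("dn4"));
    System.out.println(new RackFirstChooser().chooseRandom(topology));
  }
}
{code}
The rack-selection step is where a rack-weighting policy would plug in.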

> Datanodes usage is imbalanced if number of nodes per rack is not equal
> --
>
> Key: HDFS-13279
> URL: https://issues.apache.org/jira/browse/HDFS-13279
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13279.001.patch, HDFS-13279.002.patch, 
> HDFS-13279.003.patch, HDFS-13279.004.patch, HDFS-13279.005.patch
>
>
> In a Hadoop cluster, the number of nodes per rack can differ. For example, if 
> we have 50 datanodes in all and 15 datanodes per rack, 5 nodes remain on the 
> last rack. In this situation, we find that storage usage on those last 5 nodes 
> is much higher than on the other nodes.
>  With the default block placement policy, for each block the first replica has 
> the same probability of landing on every datanode, but the probability of the 
> 2nd/3rd replicas landing on the last 5 nodes is much higher than for the other 
> nodes.
>  Consider writing 50 blocks to these 50 datanodes. The first replicas of the 
> 50 blocks are distributed equally over the 50 nodes. The 2nd replicas of the 
> blocks whose 1st replica is on rack1 (15 replicas) are spread equally over the 
> other 35 nodes, so each of those nodes receives 0.43 of a replica; the same 
> holds for blocks starting on rack2 and rack3. As a result, a node on rack4 
> (5 nodes) receives about 1.29 replicas in total, while the other nodes receive 
> about 0.97.
> ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)||
> |From rack1|-|15/35=0.43|0.43|0.43|
> |From rack2|0.43|-|0.43|0.43|
> |From rack3|0.43|0.43|-|0.43|
> |From rack4|5/45=0.11|0.11|0.11|-|
> |Total|0.97|0.97|0.97|1.29|
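
A quick, self-contained check of the per-node figures in this table (an 
editorial sketch, not part of the original report):
{code:java}
/** Recomputes the expected 2nd-replica load per node for racks of 15/15/15/5. */
public class ReplicaLoadCheck {
  public static void main(String[] args) {
    int[] rackSizes = {15, 15, 15, 5};
    int totalNodes = 50;
    double[] loadPerNode = new double[rackSizes.length];
    for (int src = 0; src < rackSizes.length; src++) {
      // rackSizes[src] blocks (out of 50) have their 1st replica on rack src;
      // their 2nd replicas spread uniformly over the nodes outside that rack.
      int nodesOutside = totalNodes - rackSizes[src];
      for (int dst = 0; dst < rackSizes.length; dst++) {
        if (dst != src) {
          loadPerNode[dst] += (double) rackSizes[src] / nodesOutside;
        }
      }
    }
    for (int r = 0; r < loadPerNode.length; r++) {
      // Prints ~0.97 for racks 1-3 and ~1.29 for rack 4, matching the table.
      System.out.printf("rack%d node: %.2f%n", r + 1, loadPerNode[r]);
    }
  }
}
{code}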






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427861#comment-16427861
 ] 

Ajay Kumar commented on HDFS-13384:
---

[~elgoiri] thanks for working on this. Overall the patch looks good. A few 
comments on patch v1:
 * TestRouterRPCClientRetries
 ** Possibly remove L196-198, since jsonString0 is used for an assertion later?
 ** Do we need to reset the slow NN (L239-241)? Teardown will stop the cluster 
anyway.
 ** Also, it seems removing lines L222 and L238 makes the test shorter and still 
tests the same thing.
 ** The comment at line L193 mentions 12 nodes, but I think there are 4 datanodes 
and 4 namenodes.
 ** Rename setNNSlow to simulateSlowNN.
 * Introduce some reasonable timeout through a JUnit rule (see the example after 
this comment).
 * Rename SubclusterTimeoutException to SubClusterTimeoutException or 
ClusterTimeoutException.

I thought making subcluster0 slow would reduce the number of live datanodes to 2, 
but that is not the case. I would appreciate it if you could share what I am 
missing here.
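
For the JUnit rule suggestion, a minimal example of a class-wide timeout rule 
(the class name and the 100-second bound are illustrative, not taken from the 
patch):
{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestWithGlobalTimeoutExample {
  // Fails any test in this class that runs longer than the configured bound.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(100);

  @Test
  public void testSomething() throws Exception {
    // test body elided
  }
}
{code}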

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch
>
>
> When issuing RPC requests to subclusters, we have a time out mechanism 
> introduced in HDFS-12273. We need to improve this is handled.






[jira] [Comment Edited] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427861#comment-16427861
 ] 

Ajay Kumar edited comment on HDFS-13384 at 4/6/18 2:21 AM:
---

[~elgoiri] thanks for working on this. Overall the patch looks good. A few 
comments on patch v0:
 * TestRouterRPCClientRetries
 ** Possibly remove L196-198, since jsonString0 is used for an assertion later?
 ** Do we need to reset the slow NN (L239-241)? Teardown will stop the cluster 
anyway.
 ** Also, it seems removing lines L222 and L238 makes the test shorter and still 
tests the same thing.
 ** The comment at line L193 mentions 12 nodes, but I think there are 4 datanodes 
and 4 namenodes.
 ** Rename setNNSlow to simulateSlowNN.
 * Introduce some reasonable timeout through a JUnit rule.
 * Rename SubclusterTimeoutException to SubClusterTimeoutException or 
ClusterTimeoutException.

I thought making subcluster0 slow would reduce the number of live datanodes to 2, 
but that is not the case. I would appreciate it if you could share what I am 
missing here.


was (Author: ajayydv):
[~elgoiri] thanks for working on this. Overall the patch looks good. A few 
comments on patch v1:
 * TestRouterRPCClientRetries
 ** Possibly remove L196-198, since jsonString0 is used for an assertion later?
 ** Do we need to reset the slow NN (L239-241)? Teardown will stop the cluster 
anyway.
 ** Also, it seems removing lines L222 and L238 makes the test shorter and still 
tests the same thing.
 ** The comment at line L193 mentions 12 nodes, but I think there are 4 datanodes 
and 4 namenodes.
 ** Rename setNNSlow to simulateSlowNN.
 * Introduce some reasonable timeout through a JUnit rule.
 * Rename SubclusterTimeoutException to SubClusterTimeoutException or 
ClusterTimeoutException.

I thought making subcluster0 slow would reduce the number of live datanodes to 2, 
but that is not the case. I would appreciate it if you could share what I am 
missing here.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch
>
>
> When issuing RPC requests to subclusters, we have a time out mechanism 
> introduced in HDFS-12273. We need to improve this is handled.






[jira] [Commented] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-05 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427859#comment-16427859
 ] 

Shashikant Banerjee commented on HDFS-13342:


Thanks [~elek] for the review comments. As per your review comments and the 
offline discussion, I have updated the patch.

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch
>
>
> The Ozone (oz) script has wrong class names for freon etc. As a result, freon 
> cannot be started from the command line. This Jira proposes to fix all of 
> these. The oz script will be renamed to Ozone as well.






[jira] [Updated] (HDFS-13342) Ozone: Fix the class names in Ozone Script

2018-04-05 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13342:
---
Attachment: HDFS-13342-HDFS-7240.004.patch

> Ozone: Fix the class names in Ozone Script
> --
>
> Key: HDFS-13342
> URL: https://issues.apache.org/jira/browse/HDFS-13342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13342-HDFS-7240.001.patch, 
> HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, 
> HDFS-13342-HDFS-7240.004.patch
>
>
> The Ozone (oz) script has wrong class names for freon etc. As a result, freon 
> cannot be started from the command line. This Jira proposes to fix all of 
> these. The oz script will be renamed to Ozone as well.






[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427838#comment-16427838
 ] 

genericqa commented on HDFS-13399:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
22s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs-client in HDFS-12943 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs in HDFS-12943 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 15s{color} | {color:orange} root: The patch generated 8 new + 455 unchanged 
- 0 fixed = 463 total (was 455) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 18 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-client generated 0 
new + 0 unchanged - 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 1 
unchanged - 2 fixed = 1 total (was 3) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | 

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427811#comment-16427811
 ] 

Takanobu Asanuma commented on HDFS-13353:
-

Thanks for committing it!

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13353.1.patch, HDFS-13353.2.patch, 
> HDFS-13353.3.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> 

[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427785#comment-16427785
 ] 

genericqa commented on HDFS-13237:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917793/HDFS-13237.003.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux b169fa85117d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6cf023f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23801/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch, HDFS-13237.003.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Created] (HDFS-13406) Encryption zone creation fails with key names that are generally valid

2018-04-05 Thread David Tucker (JIRA)
David Tucker created HDFS-13406:
---

 Summary: Encryption zone creation fails with key names that are 
generally valid
 Key: HDFS-13406
 URL: https://issues.apache.org/jira/browse/HDFS-13406
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: kms, namenode
Affects Versions: 3.0.0
 Environment: non-Kerberized
Reporter: David Tucker


Under load (up to 24 clients trying to append to a non-existent file in 
separate encryption zones simultaneously), the KMS returns a 400 for a 
character that is not present in the path or key name. For example,
{code:java}
IOException: ERROR_APPLICATION: HTTP status [400], message [Illegal character 
0xA]
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:174)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:540)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:536)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:501)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:877)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:393)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:390)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getMetadata(LoadBalancingKMSClientProvider.java:390)
at 
org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:124)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7002)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2036)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1448)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
{code}
I have a number of path/key pairs that were used to trigger this, but I've 
tried hard-coding one of them, and it seems to work fine:
{code}
('Path:', 
u'/chai_test37791178-e643-4ade-bbea-3a46a368eb01-\u0531\u308b\ufea1\u10d6\ufe8f\uc738\u2116\u05e1\u0562\u0be9\u050e/chai_testdc99561d-c64b-48c2-a1b2-203eecd424b0-\u0718\u05f1\u0133\u06f3\u2083\u316a\u0ec4\u07cb\u22e9\xa5\u07d0\u318d\xbc\u0e01\u02e8\u0277\u316a\u2030\u3113\u0e81\u9fa5/chai_test8f897e3a-fed6-4e5a-ae3e-524c73b760cc-\u05d0\u10fb\u3405\u0105\u2100\u0e0d\u0256\u9fa5\u0baa\u10f5\u0429\u2122\u05d0\u3106\u02b1\u3113\u2014\u2122\u05d0\ud7a3')
('Key:', 

[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427725#comment-16427725
 ] 

Erik Krogen commented on HDFS-13399:


Hey [~zero45], I took a first look at the patch. I think the approach is clean: 
{{DFSClient}} is the sensible place to store the single {{AlignmentContext}}, 
and it can then pass it down to its proxies / ProxyProviders. All of the method 
overrides are not exactly great, but they are in line with what's already 
happening in this area of the code.

A few comments:
* We currently don't have a version of 
{{NameNodeProxies#createProxy(Configuration, URI, Class, AtomicBoolean)}} 
which accepts an alignment context; should we add one?
* Why is one {{getProxy()}} method removed from {{ProtobufRpcEngine}}?
* Why is {{IPFailoverProxyProvider}} passing null as its alignment context 
instead of {{getAlignmentContext()}}?

Also, a few notes on things I had to wrap my head around, in the hope that 
others won't get stuck on them (see the sketch below):
* The alignment context is necessary even when we create a non-HA proxy, because 
non-HA proxies are used internally by the HA proxies.
* The alignment context is necessary in both the ProxyProvider and the proxy 
itself, since the ProxyProvider will go on to create additional proxies.
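
A schematic sketch of the overall shape being discussed, i.e. one per-client 
context threaded into every proxy instead of a shared static field (simplified 
stand-in types only; this is not the Hadoop API beyond the names quoted above):
{code:java}
/** Simplified illustration of per-client AlignmentContext threading. */
public class AlignmentContextThreadingSketch {
  interface AlignmentContext { }

  static class ClientSideContext implements AlignmentContext { }

  static class RpcProxy {
    private final AlignmentContext context;   // held per proxy, not statically
    RpcProxy(AlignmentContext context) { this.context = context; }
    AlignmentContext context() { return context; }
  }

  static class ClientLikeDfsClient {
    private final AlignmentContext context = new ClientSideContext();

    // Every proxy (and any proxy provider) receives this client's context,
    // so two clients in one JVM no longer share state.
    RpcProxy createProxy() { return new RpcProxy(context); }
  }

  public static void main(String[] args) {
    ClientLikeDfsClient a = new ClientLikeDfsClient();
    ClientLikeDfsClient b = new ClientLikeDfsClient();
    System.out.println(a.createProxy().context() == b.createProxy().context());  // false
  }
}
{code}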

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass it's 
> AlignmentContext down to the proxy Call level.






[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-05 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427723#comment-16427723
 ] 

Wei Yan commented on HDFS-13237:


[~elgoiri] thanks for the fix. +1 on [^HDFS-13237.003.patch].

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch, HDFS-13237.003.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427720#comment-16427720
 ] 

Íñigo Goiri commented on HDFS-13237:


Thanks [~ywskycn] for the comments; I tackled them in [^HDFS-13237.003.patch].
Regarding the RANDOM approach, I added an extra paragraph explaining how we use 
that approach internally.

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch, HDFS-13237.003.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13237:
---
Attachment: HDFS-13237.003.patch

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch, HDFS-13237.003.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Comment Edited] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427693#comment-16427693
 ] 

Ajay Kumar edited comment on HDFS-11201 at 4/5/18 10:33 PM:


[~gsohn] Patch doesn't apply to trunk anymore, mind rebasing the patch?


was (Author: ajayydv):
[~gsohn] Patch doesn't apply to trunk anymore, Mind rebasing the patch?

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary






[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427693#comment-16427693
 ] 

Ajay Kumar commented on HDFS-11201:
---

[~gsohn] Patch doesn't apply to trunk anymore, Mind rebasing the patch?

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary






[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13399:
---
Status: Patch Available  (was: Open)

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass it's 
> AlignmentContext down to the proxy Call level.






[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13399:
---
Target Version/s: HDFS-12943

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass it's 
> AlignmentContext down to the proxy Call level.






[jira] [Commented] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427665#comment-16427665
 ] 

genericqa commented on HDFS-13405:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 144 new or modified 
test files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-tools/hadoop-tools-dist hadoop-tools 
hadoop-dist . hadoop-hdsl hadoop-ozone hadoop-ozone/acceptance-test 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} server in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} framework in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-ozone in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
17s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 26m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-05 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427631#comment-16427631
 ] 

Gabor Bota commented on HDFS-13363:
---

Thanks for the suggestion [~xiaochen]! Are you sure that we can use those input 
parameters? And which one should we use for this purpose? I think getting the 
full path name this way will return the path with the ACL issue.

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path involved. Therefore, even when an exception is thrown, we never 
> know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which do 
> know the file path. The FSDirAclOp methods can catch the AclException and add 
> the file path to the error message.
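
A minimal sketch of the wrap-and-rethrow pattern the description proposes, with 
illustrative names only (this is not the HDFS code or the attached patches):
{code:java}
import java.io.IOException;

public class AclErrorContextSketch {
  static class AclException extends IOException {
    AclException(String msg) { super(msg); }
    AclException(String msg, Throwable cause) { super(msg, cause); }
  }

  // Stand-in for an AclTransformation-style method that has no path available.
  static void transform() throws AclException {
    throw new AclException("Invalid ACL: only one default mask entry is allowed");
  }

  // Stand-in for an FSDirAclOp-style method that knows the path and adds it.
  static void setAcl(String path) throws AclException {
    try {
      transform();
    } catch (AclException e) {
      throw new AclException("Invalid ACL on " + path + ": " + e.getMessage(), e);
    }
  }

  public static void main(String[] args) {
    try {
      setAcl("/user/alice/data");
    } catch (AclException e) {
      System.out.println(e.getMessage());  // message now includes the file path
    }
  }
}
{code}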






[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13347:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Cache datanode reports
> ---
>
> Key: HDFS-13347
> URL: https://issues.apache.org/jira/browse/HDFS-13347
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, 
> HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, 
> HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch
>
>
> Getting the datanode reports is an expensive operation and can be executed 
> very frequently by the UI and watchdogs. We should cache this information.
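
The caching idea amounts to remembering the last report for a short interval 
instead of recomputing it on every call. A plain-Java sketch of such a 
time-bounded cache (illustrative only; not the Router implementation in the 
attached patches):
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/** Caches the result of an expensive call for a fixed time-to-live. */
public class CachedReport<T> {
  private final Supplier<T> loader;
  private final long ttlNanos;
  private T cached;
  private long loadedAt;

  public CachedReport(Supplier<T> loader, long ttl, TimeUnit unit) {
    this.loader = loader;
    this.ttlNanos = unit.toNanos(ttl);
  }

  public synchronized T get() {
    long now = System.nanoTime();
    if (cached == null || now - loadedAt > ttlNanos) {
      cached = loader.get();   // the expensive call, e.g. building datanode reports
      loadedAt = now;
    }
    return cached;
  }

  public static void main(String[] args) {
    CachedReport<String> reports =
        new CachedReport<>(() -> "datanode report", 10, TimeUnit.SECONDS);
    System.out.println(reports.get());  // first call loads
    System.out.println(reports.get());  // second call is served from the cache
  }
}
{code}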






[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11201:
-
Fix Version/s: (was: 3.0.2)

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13198:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13198.000.patch, HDFS-13198.001.patch, 
> HDFS-13198.002.patch, HDFS-13198.003.patch, HDFS-13198.004.patch, 
> HDFS-13198.005.patch
>
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized and cannot serve requests. Although the Router still starts, it 
> would be better to fix these exceptions.
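A hedged sketch of the kind of guard described above; the CachedStore interface, isInitialized() and update() are hypothetical stand-ins, not the actual RouterHeartbeatService or State Store API:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch only: skip the heartbeat until the cached store reports ready. */
public class GuardedHeartbeat {
  private static final Logger LOG =
      LoggerFactory.getLogger(GuardedHeartbeat.class);

  /** Hypothetical view of the cached State Store. */
  public interface CachedStore {
    boolean isInitialized();
    void update();
  }

  public void heartbeat(CachedStore store) {
    if (!store.isInitialized()) {
      // Router is still starting; avoid the NullPointerException shown above.
      LOG.debug("Cached State Store not initialized yet, skipping heartbeat");
      return;
    }
    store.update();
  }
}
{code}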



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13289:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Fix For: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13289.001.patch, HDFS-13289.002.patch, 
> HDFS-13289.003.patch, HDFS-13289.004.patch, HDFS-13289.005.patch, 
> HDFS-13289.006.patch
>
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but checkPoolConnections 
> is called with TEST_USER3. 
> In the checkPoolConnections method 
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> only when this condition holds does it validate numOfConns and numOfActiveConns. 
> In this case the *if* condition returns *false* for TEST_USER3, so the test case 
> will pass no matter what values are passed to checkPoolConnections.
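A hedged sketch of the corrected snippet, simply keeping the user consistent between the pool id and the assertion; the identifiers are taken from the fragment quoted above and not re-verified against the current test:

{code:java}
// Make sure the number of connections doesn't go below minSize
ConnectionPool pool3 = new ConnectionPool(
    conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
addConnectionsToPool(pool3, 10, 0);
// Use TEST_USER3 here as well, so checkPoolConnections actually matches
// this pool and validates the connection counts instead of passing vacuously.
poolMap.put(new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS), pool3);
connManager.cleanup(pool3);
checkPoolConnections(TEST_USER3, 2, 0);
{code}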



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13204:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Assignee: liuhongtong
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, 
> HDFS-13204.003.patch, HDFS-13204.004.patch, HDFS-13204.005.patch, 
> HDFS-13204.006.patch, HDFS-13204.007.patch, HDFS-13204.008.patch, 
> Routers.png, Subclusters.png, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png, 
> image-2018-03-23-18-06-54-354.png, image-2018-03-26-10-10-10-930.png, 
> image-2018-03-26-10-21-24-171.png
>
>
> In the federation health web page, the safe mode icons for Subclusters and Routers 
> are inconsistent.
> The safe mode icon for Subclusters may mislead users into thinking the name service 
> is under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon for Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users cannot perform write-related 
> operations. So I think the safe mode icon for Subclusters should be changed, which 
> would be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13318:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Fix FindBugs in hadoop-hdfs-rbf
> 
>
> Key: HDFS-13318
> URL: https://issues.apache.org/jira/browse/HDFS-13318
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13318.001.patch
>
>
> hadoop-hdfs-rbf has 3 FindBug warnings:
> * NamenodePriorityComparator should be serializable
> * RemoteMethod.getTypes() may expose internal representation
> * RemoteMethod may store mutable objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13195:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> branch-2.7 now supports reconfiguring dfs.datanode.data.dir, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> confForInfoServer is a new Configuration instance, so when dfsadmin 
> reconfigures the datanode's config, the result is not reflected in 
> confForInfoServer; we should use the datanode's conf instead.
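A hedged sketch of the direction only, not the actual patch: whatever backs the /conf page should read from the datanode's live Configuration, so reconfigured values become visible. This fragment assumes a DataNode reference named datanode and that its getConf() returns the live, reconfigurable Configuration:

{code:java}
// Sketch under the assumptions above: consult the live conf, not a detached copy.
Configuration liveConf = datanode.getConf();
String dataDirs = liveConf.get("dfs.datanode.data.dir");
// dataDirs now reflects "hdfs dfsadmin -reconfig datanode <host:port> start" results.
{code}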



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13184) RBF: Improve the unit test TestRouterRPCClientRetries

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13184:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Improve the unit test TestRouterRPCClientRetries
> -
>
> Key: HDFS-13184
> URL: https://issues.apache.org/jira/browse/HDFS-13184
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13184.001.patch, HDFS-13184.002.patch, 
> HDFS-13184.003.patch
>
>
> Following the proposal in 
> https://issues.apache.org/jira/browse/HDFS-13119?focusedCommentId=16370421=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16370421,
>  this will speed up the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13230) RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13230:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: ConnectionManager's cleanup task will compare each pool's own active 
> conns with its total conns
> 
>
> Key: HDFS-13230
> URL: https://issues.apache.org/jira/browse/HDFS-13230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13230.000.patch, HDFS-13230.001.patch
>
>
> In the cleanup task:
> {code:java}
> long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
> int total = pool.getNumConnections();
> int active = getNumActiveConnections();
> if (timeSinceLastActive > connectionCleanupPeriodMs ||
> {code}
> the 3rd line should be pool.getNumActiveConnections()
>  
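A hedged sketch of the one-line correction described above, mirroring the quoted (truncated) fragment rather than the full method:

{code:java}
long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
int total = pool.getNumConnections();
// The fix: ask the pool itself for its active count, not the manager-wide counter.
int active = pool.getNumActiveConnections();
boolean idleTooLong = timeSinceLastActive > connectionCleanupPeriodMs;
// (the remainder of the original condition is elided in the quote above)
{code}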



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13175) Add more information for checking argument in DiskBalancerVolume

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13175:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Add more information for checking argument in DiskBalancerVolume
> 
>
> Key: HDFS-13175
> URL: https://issues.apache.org/jira/browse/HDFS-13175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: diskbalancer
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13175.00.patch, HDFS-13175.01.patch
>
>
> We have seen the following stack in production
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123)
>   at 
> org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107)
> {code}
> raised from 
> {code}
>  public void setUsed(long dfsUsedSpace) {
> Preconditions.checkArgument(dfsUsedSpace < this.getCapacity());
> this.used = dfsUsedSpace;
>   }
> {code}
> However, the datanode reports at that very moment were not captured. We should 
> add more information to the stack trace to better diagnose the issue.
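A hedged sketch of the kind of change suggested, using Guava Preconditions' message/argument overload; the exact wording of the message is illustrative only:

{code:java}
public void setUsed(long dfsUsedSpace) {
  // Include both values in the exception so the failing storage report can be
  // diagnosed without reproducing the datanode state.
  Preconditions.checkArgument(dfsUsedSpace < this.getCapacity(),
      "DiskBalancerVolume.setUsed: dfsUsedSpace (%s) must be smaller than capacity (%s)",
      dfsUsedSpace, this.getCapacity());
  this.used = dfsUsedSpace;
}
{code}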



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11600:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, 
> HDFS-11600.003.patch, HDFS-11600.004.patch, HDFS-11600.005.patch, 
> HDFS-11600.006.patch, HDFS-11600.007.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> It seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note: the tests will randomly return instead of running the test body. 
> Using {{Assume}} instead would make it clearer in the test output that 
> these tests were skipped.
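A hedged, generic sketch of the JUnit 4 pattern the description suggests (parameterization plus Assume), not the actual refactor; the class name and parameter values are placeholders:

{code:java}
import static org.junit.Assume.assumeTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestWithFailureParameterized {

  @Parameters(name = "base={0}")
  public static Collection<Object[]> data() {
    // Each entry replaces one of the old hand-written subclasses.
    return Arrays.asList(new Object[][] {{0}, {1}, {2}});
  }

  private final int base;

  public TestWithFailureParameterized(int base) {
    this.base = base;
  }

  @Test
  public void testWithFailure() {
    // Skipped cases show up as "skipped" in the report instead of silently returning.
    assumeTrue("only even bases are exercised here", base % 2 == 0);
    // ... the actual striped-output-stream failure scenario would go here ...
  }
}
{code}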



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13232:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In the current ConnectionPool.getConnection(), it returns the first active 
> connection:
> {code:java}
> for (int i = 0; i < size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13314:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> NameNode should optionally exit if it detects FsImage corruption
> 
>
> Key: HDFS-13314
> URL: https://issues.apache.org/jira/browse/HDFS-13314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, 
> HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch
>
>
> The NameNode should optionally exit after writing an FsImage if it detects 
> the following kinds of corruptions:
> # INodeReference pointing to non-existent INode
> # Duplicate entries in snapshot deleted diff list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13215) RBF: Move Router to its own module

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13215:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13215.000.patch, HDFS-13215.001.patch, 
> HDFS-13215.002.patch, HDFS-13215.003.patch, HDFS-13215.004.patch, 
> HDFS-13215.005.patch, HDFS-13215.006.patch, HDFS-13215.007.patch, 
> HDFS-13215.008.patch, HDFS-13215.009.patch, HDFS-13215_branch-2.001.patch, 
> HDFS-13215_branch-2.002.patch, HDFS-13215_branch-2.9.001.patch, 
> HDFS-13215_branch-2.9.002.patch, HDFS-13215_branch-2.9.003.patch, 
> HDFS-13215_branch-2.9.004.patch, HDFS-13215_branch-3.0.001.patch, 
> HDFS-13215_branch-3.0.002.patch, HDFS-13215_branch-3.1.001.patch
>
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-05 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13353:
---
   Resolution: Fixed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. Thanks 
for the contribution [~tasanuma0829], and [~elgoiri] for the review.

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13353.1.patch, HDFS-13353.2.patch, 
> HDFS-13353.3.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> 

[jira] [Updated] (HDFS-11187) Optimize disk access for last partial chunk checksum of Finalized replica

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11187:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Optimize disk access for last partial chunk checksum of Finalized replica
> -
>
> Key: HDFS-11187
> URL: https://issues.apache.org/jira/browse/HDFS-11187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>
> Attachments: HDFS-11187-branch-2.001.patch, 
> HDFS-11187-branch-2.002.patch, HDFS-11187-branch-2.003.patch, 
> HDFS-11187-branch-2.004.patch, HDFS-11187-branch-2.7.001.patch, 
> HDFS-11187.001.patch, HDFS-11187.002.patch, HDFS-11187.003.patch, 
> HDFS-11187.004.patch, HDFS-11187.005.patch
>
>
> The patch at HDFS-11160 ensures BlockSender reads the correct version of 
> metafile when there are concurrent writers.
> However, the implementation is not optimal, because it must always read the 
> last partial chunk checksum from disk while holding the FsDatasetImpl lock for 
> every reader. It is possible to optimize this by keeping an up-to-date 
> copy of the last partial checksum in memory and reducing disk access.
> I am separating the optimization into a new jira, because maintaining the 
> state of the in-memory checksum requires a lot more work.
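As a hedged illustration of "keep the last partial chunk checksum in memory", outside the real FsDatasetImpl and replica classes; every name here is hypothetical:

{code:java}
/** Hypothetical holder: caches the last partial chunk checksum of a finalized replica. */
public class PartialChunkChecksumCache {
  // volatile so readers see the latest value written by the finalizing writer
  private volatile byte[] lastPartialChunkChecksum;

  /** Called when the replica is finalized or its last chunk changes. */
  public void update(byte[] checksum) {
    this.lastPartialChunkChecksum = checksum == null ? null : checksum.clone();
  }

  /** A reader can use this instead of re-reading the meta file under the dataset lock. */
  public byte[] get() {
    byte[] c = lastPartialChunkChecksum;
    return c == null ? null : c.clone();
  }
}
{code}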



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12884:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
> ---
>
> Key: HDFS-12884
> URL: https://issues.apache.org/jira/browse/HDFS-12884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>Assignee: chencan
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>
> Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, 
> HDFS-12884.003.patch
>
>
> {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to 
> {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as 
> {{BlockInfo}}, so this will avoid unnecessary casts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12792:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Test Router-based federation using HDFSContract
> 
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, 
> HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch, 
> HDFS-12792.005.patch, HDFS-12792.006.patch, HDFS-12792.007.patch, 
> HDFS-12792.008.patch, HDFS-12792.009.patch, HDFS-12792.010.patch, 
> HDFS-12792.011.patch, HDFS-12792.012.patch, HDFS-12792.013.patch
>
>
> Router-based federation should support HDFSContract.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13291) RBF: Implement available space based OrderResolver

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13291:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Implement available space based OrderResolver
> --
>
> Key: HDFS-13291
> URL: https://issues.apache.org/jira/browse/HDFS-13291
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13291-branch-2.001.patch, HDFS-13291.001.patch, 
> HDFS-13291.002.patch, HDFS-13291.003.patch, HDFS-13291.004.patch, 
> HDFS-13291.005.patch, HDFS-13291.006.patch, HDFS-13291.007.patch, 
> HDFS-13291.008.patch, HDFS-13291.009.patch
>
>
> Implement an available-space-based OrderResolver; this type of resolver will 
> help balance data across subclusters. 
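A hedged sketch of the ordering idea only: rank candidate subclusters by free space, most available space first. The SubclusterSpace type is hypothetical, and the real resolver would plug into the Router's order-resolver interface rather than a static helper:

{code:java}
import java.util.Comparator;
import java.util.List;

public final class AvailableSpaceOrder {
  private AvailableSpaceOrder() {}

  /** Hypothetical view of a subcluster's free capacity in bytes. */
  public static class SubclusterSpace {
    public final String nameservice;
    public final long availableBytes;

    public SubclusterSpace(String nameservice, long availableBytes) {
      this.nameservice = nameservice;
      this.availableBytes = availableBytes;
    }
  }

  /** Orders subclusters so that writes prefer the one with the most free space. */
  public static void sortByAvailableSpace(List<SubclusterSpace> subclusters) {
    subclusters.sort(
        Comparator.comparingLong((SubclusterSpace s) -> s.availableBytes).reversed());
  }
}
{code}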



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13250:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: https://issues.apache.org/jira/browse/HDFS-13250
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13250.000-addendum-branch-2.patch, 
> HDFS-13250.000.patch, HDFS-13250.001.patch, HDFS-13250.002.patch, 
> HDFS-13250.003.patch, HDFS-13250.004.patch, HDFS-13250.005.patch
>
>
> HDFS-13124 introduces the concept of mount points spanning multiple 
> subclusters. The Router should distribute the requests across these 
> subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11942) make new chooseDataNode policy work in more operation like seek, fetch

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11942:
-
Fix Version/s: (was: 3.0.2)

> make new  chooseDataNode policy  work in more operation like seek, fetch
> 
>
> Key: HDFS-11942
> URL: https://issues.apache.org/jira/browse/HDFS-11942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha3
>Reporter: Fangyuan Deng
>Priority: Major
> Attachments: HDFS-11942.0.patch, HDFS-11942.1.patch, 
> ssd-first-disable(default).png, ssd-first-enable.png
>
>
> With the default policy, if a file is ONE_SSD the client prefers reading the local 
> disk replica rather than the remote SSD replica.
> But now, PCI-e SSDs and 10G Ethernet make a remote SSD read faster than 
> the local disk.
> HDFS-9666 gave us a patch, but the code is incomplete and has not been updated for 
> a long time.
> This sub-task provides a complete patch, and 
> we have tested on three machines [ 32-core CPU, 128G mem, 1000M network, 
> 1.2T HDD, 800G SSD (Intel P3600) ].
> With this feature, the throughput of an HBase table (ONE_SSD) is double that 
> without it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13087) Snapshotted encryption zone information should be immutable

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13087:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Snapshotted encryption zone information should be immutable
> ---
>
> Key: HDFS-13087
> URL: https://issues.apache.org/jira/browse/HDFS-13087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, 
> HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch
>
>
> Snapshots are supposed to be immutable and read-only, so the EZ settings 
> within a snapshot path shouldn't change when the original encryption zone 
> changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13114) CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13114:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path
> 
>
> Key: HDFS-13114
> URL: https://issues.apache.org/jira/browse/HDFS-13114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13114.001.patch
>
>
> The {{crypto -reencryptZone  -path }} command takes in a path 
> argument. But when creating the {{HdfsAdmin}} object, it uses the defaultFs 
> instead of resolving it from the path. This causes the following exception if 
> the authority component in the path does not match the authority of the default Fs.
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://mycluster-node-1:8020/zone1
> IllegalArgumentException: Wrong FS: hdfs://mycluster-node-1:8020/zone1, 
> expected: hdfs://ns1{code}
> {code:java}
> $ hdfs crypto -reencryptZone -start -path hdfs://ns2/zone2
> IllegalArgumentException: Wrong FS: hdfs://ns2/zone2, expected: 
> hdfs://ns1{code}
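A hedged sketch of resolving the filesystem from the supplied path rather than the default FS; it assumes the standard Path#getFileSystem(Configuration) and the HdfsAdmin(URI, Configuration) constructor, and the helper class itself is hypothetical:

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public final class ReencryptHelper {
  private ReencryptHelper() {}

  /** Builds an HdfsAdmin against the namespace that actually owns the path. */
  public static HdfsAdmin adminFor(Path zone, Configuration conf)
      throws IOException {
    FileSystem fs = zone.getFileSystem(conf);  // resolves hdfs://ns2/..., not fs.defaultFS
    URI nsUri = fs.getUri();
    return new HdfsAdmin(nsUri, conf);
  }
}
{code}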



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427536#comment-16427536
 ] 

Hudson commented on HDFS-13353:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13931 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13931/])
HDFS-13353. RBF: TestRouterWebHDFSContractCreate failed. Contributed by (weiy: 
rev 3121e8c29361cb560df29188e1cd1061a5fc34c4)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCreateTest.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/resources/contract/webhdfs.xml


> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch, HDFS-13353.2.patch, 
> HDFS-13353.3.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> 

[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12773:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, 
> HDFS-12773.002.patch, HDFS-12773.003.patch, HDFS-12773.004.patch, 
> HDFS-12773.005.patch, HDFS-12773.006.patch, HDFS-12773.007.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13253) RBF: Quota management incorrect parent-child relationship judgement

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13253:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Quota management incorrect parent-child relationship judgement
> ---
>
> Key: HDFS-13253
> URL: https://issues.apache.org/jira/browse/HDFS-13253
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13253.001.patch, HDFS-13253.002.patch, 
> HDFS-13253.003.patch
>
>
> The Router quota management does not check the parent-child relationship 
> properly. Similar to HDFS-13233.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13224) RBF: Resolvers to support mount points across multiple subclusters

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13224:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Resolvers to support mount points across multiple subclusters
> --
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13224-branch-2.000.patch, HDFS-13224.000.patch, 
> HDFS-13224.001.patch, HDFS-13224.002.patch, HDFS-13224.003.patch, 
> HDFS-13224.004.patch, HDFS-13224.005.patch, HDFS-13224.006.patch, 
> HDFS-13224.007.patch, HDFS-13224.008.patch, HDFS-13224.009.patch, 
> HDFS-13224.010.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12886) Ignore minReplication for block recovery

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12886:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Ignore minReplication for block recovery
> 
>
> Key: HDFS-12886
> URL: https://issues.apache.org/jira/browse/HDFS-12886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-12886.001.patch, HDFS-12886.002.patch, 
> HDFS-12886_branch-2.000.patch
>
>
> Ignore minReplication for blocks that went through recovery, and allow NN to 
> complete them and replicate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13212:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Assignee: Weiwei Wu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, 
> HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> is never invalidated until this mount point changes again.
> We need to invalidate the location cache when adding mount table entries.
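A hedged, standalone sketch of the invalidation idea; a real fix would live inside MountTableResolver, and the cache map here is a hypothetical stand-in for the resolver's location cache:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocationCacheSketch {
  // path -> resolved location; stands in for the resolver's location cache
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();

  /** When a mount entry is added or changed, drop every cached path under it. */
  public void invalidate(String mountPoint) {
    locationCache.keySet().removeIf(path ->
        path.equals(mountPoint) || path.startsWith(mountPoint + "/"));
  }
}
{code}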



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12781:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode
> Refresh or click on the Datanode tab in namenode UI.
> Actual Output:
> ==
> It throws a warning message. Please find the warning message below:
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanodes' 
> information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13109:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch, 
> HDFS-13109.006.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13040) Kerberized inotify client fails despite kinit properly

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13040:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Kerberized inotify client fails despite kinit properly
> --
>
> Key: HDFS-13040
> URL: https://issues.apache.org/jira/browse/HDFS-13040
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: Kerberized, HA cluster, iNotify client, CDH5.10.2
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.3
>
> Attachments: HDFS-13040.001.patch, HDFS-13040.02.patch, 
> HDFS-13040.03.patch, HDFS-13040.04.patch, HDFS-13040.05.patch, 
> HDFS-13040.06.patch, HDFS-13040.07.patch, HDFS-13040.branch-2.01.patch, 
> HDFS-13040.half.test.patch, TestDFSInotifyEventInputStreamKerberized.java, 
> TransactionReader.java
>
>
> This issue is similar to HDFS-10799.
> HDFS-10799 turned out to be a client side issue where client is responsible 
> for renewing kerberos ticket actively.
> However, we found that in a slightly different setup, even if the client has 
> valid Kerberos credentials, inotify still fails.
> Suppose client uses principal h...@example.com, 
>  namenode 1 uses server principal hdfs/nn1.example@example.com
>  namenode 2 uses server principal hdfs/nn2.example@example.com
> *After Namenodes starts for longer than kerberos ticket lifetime*, the client 
> fails with the following error:
> {noformat}
> 18/01/19 11:23:02 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@gce.cloudera.com (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): We 
> encountered an error reading 
> https://nn2.example.com:8481/getJournal?jid=ns1=8662=-60%3A353531113%3A0%3Acluster3,
>  
> https://nn1.example.com:8481/getJournal?jid=ns1=8662=-60%3A353531113%3A0%3Acluster3.
>   During automatic edit log failover, we noticed that all of the remaining 
> edit log streams are shorter than the current one!  The best remaining edit 
> log ends at transaction 8683, but we thought we could read up to transaction 
> 8684.  If you continue, metadata will be lost forever!
> at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.readOp(NameNodeRpcServer.java:1701)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEditsFromTxid(NameNodeRpcServer.java:1763)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEditsFromTxid(AuthorizationProviderProxyClientProtocol.java:1011)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEditsFromTxid(ClientNamenodeProtocolServerSideTranslatorPB.java:1490)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> Typically, if the NameNode has an expired Kerberos ticket, the error handling for 
> the usual edit log tailing lets the NameNode re-login with its own 
> Kerberos principal. However, when inotify uses the same code path to retrieve 
> edits, the current user is the inotify client's principal, so unless the 
> client uses the same principal as the NameNode, the NameNode can't do this on 
> behalf of the client.
> Therefore, a more appropriate approach is to use a proxy user so that the NameNode 
> can retrieve edits on behalf of the client.
> I will attach a patch to fix it. This patch has been verified to work on a 
> CDH5.10.2 cluster; however, it seems impossible to craft a unit test for this 
> fix because of the way Hadoop UGI handles Kerberos credentials (I can't have a 
> single process that logs in as two Kerberos principals simultaneously and lets 
> them establish a connection).
> A possible workaround is for the inotify client to use the active 

[jira] [Updated] (HDFS-12723) TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12723:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing 
> consistently.
> 
>
> Key: HDFS-12723
> URL: https://issues.apache.org/jira/browse/HDFS-12723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-12723.000.patch, HDFS-12723.001.patch, 
> HDFS-12723.002.patch
>
>
> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks is timing 
> out consistently on my local machine.
> {noformat}
> Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.405 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> testReadFileWithMissingBlocks(org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks)
>   Time elapsed: 132.171 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for /foo to have all 
> the internalBlocks
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:295)
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:256)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:98)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:82)
> Results :
> Tests in error: 
>   
> TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks:82->readFileWithMissingBlocks:98
>  » Timeout
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13233) RBF: MountTableResolver doesn't return the correct mount point of the given path

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13233:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: MountTableResolver doesn't return the correct mount point of the given 
> path
> 
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: wangzhiyuan
>Assignee: wangzhiyuan
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch, 
> HDFS-13233.003.patch, HDFS-13233.004.patch, HDFS-13233.005.patch
>
>
> The method MountTableResolver#getMountPoint traverses the mount table and returns 
> the first mount point that the input path starts with, but that condition is 
> not sufficient.
> Suppose the mount table contains "/user", "/user/test" and "/user/test1". 
> If the input path is "/user/test111", the returned mount point is 
> "/user/test1", but the correct one should be "/user".
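
A minimal, self-contained sketch of component-aware longest-prefix matching that avoids this pitfall. The class and method names are illustrative only, not the actual MountTableResolver code:

{code:java}
import java.util.Arrays;
import java.util.List;

public class MountPointMatchSketch {

  /**
   * Return the longest mount point that is a path-component prefix of the
   * given path, or null if none matches. "/user/test1" is not considered a
   * prefix of "/user/test111" because the match must end at a '/' boundary.
   */
  static String getMountPoint(List<String> mountPoints, String path) {
    String best = null;
    for (String mp : mountPoints) {
      boolean componentPrefix = path.equals(mp)
          || path.startsWith(mp.endsWith("/") ? mp : mp + "/");
      if (componentPrefix && (best == null || mp.length() > best.length())) {
        best = mp;
      }
    }
    return best;
  }

  public static void main(String[] args) {
    List<String> table = Arrays.asList("/user", "/user/test", "/user/test1");
    System.out.println(getMountPoint(table, "/user/test111")); // -> /user
    System.out.println(getMountPoint(table, "/user/test1/x")); // -> /user/test1
  }
}
{code}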



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12455:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots, webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> The WebHDFS ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes, it would 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13119) RBF: Manage unavailable clusters

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13119:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Manage unavailable clusters
> 
>
> Key: HDFS-13119
> URL: https://issues.apache.org/jira/browse/HDFS-13119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13119.001.patch, HDFS-13119.002.patch, 
> HDFS-13119.003.patch, HDFS-13119.004.patch, HDFS-13119.005.patch, 
> HDFS-13119.006.patch
>
>
> When a federated cluster has one of its subclusters down, operations that run 
> in every subcluster ({{RouterRpcClient#invokeAll()}}) may take up all the RPC 
> connections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13199) RBF: Fix the hdfs router page missing label icon issue

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13199:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Fix the hdfs router page missing label icon issue
> --
>
> Key: HDFS-13199
> URL: https://issues.apache.org/jira/browse/HDFS-13199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13199.001.patch
>
>
> This bug is a typo:
> "decommisioned" should be "decommissioned".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13225) StripeReader#checkMissingBlocks() 's IOException info is incomplete

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13225:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> StripeReader#checkMissingBlocks() 's IOException info is incomplete
> ---
>
> Key: HDFS-13225
> URL: https://issues.apache.org/jira/browse/HDFS-13225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.1.0, 3.2.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13225.001.patch, HDFS-13225.002.patch
>
>
> When the file's ErasureCodingPolicy is XOR-3-2-128k and 3 of the datanodes 
> used by the block are stopped, the following operation (reading the file) 
> produces an incomplete exception message, because the LocatedBlocks 
> information is not displayed completely.
>  
> hadoop@EC102:/home/lufei> hadoop fs -get 
> /lufei/fsimage_00268172191_140 test112
> get: 3 missing blocks, the stripe is: AlignedStripe(Offset=0, 
> length=131072, fetchedChunksNum=0, missingChunksNum=3); locatedBlocks is: 
> LocatedBlocks{
> hadoop@EC102:/home/lufei>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13190) Document WebHDFS support for snapshot diff

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13190:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Document WebHDFS support for snapshot diff
> --
>
> Key: HDFS-13190
> URL: https://issues.apache.org/jira/browse/HDFS-13190
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13190.001.patch, HDFS-13190.002.patch
>
>
> This ticket is opened to document the WebHDFS support for snapshot diff 
> added in HDFS-13052 in WebHDFS.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11399) Many tests fails in Windows due to injecting disk failures

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11399:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Many tests fails in Windows due to injecting disk failures
> --
>
> Key: HDFS-11399
> URL: https://issues.apache.org/jira/browse/HDFS-11399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-11399-branch-2.000.patch, HDFS-11399.001.patch, 
> HDFS-11399.002.patch
>
>
> Found many tests run failed in Windows due to using 
> {{DataNodeTestUtils#injectDataDirFailure}}. One of the failure test:
> {code}
> java.io.IOException: Failed to rename 
> D:\work-project\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data2
>  to 
> D:\work-project\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data2.origin.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.injectDataDirFailure(DataNodeTestUtils.java:156)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean.testStorageTypeStatsWhenStorageFailed(TestBlockStatsMXBean.java:176)
> {code}
> The root cause of this is that the test method uses 
> {{DataNodeTestUtils#injectDataDirFailure}} to injects disk failures but 
> however missing the compare operation {{assumeNotWindows}}. Currently 
> {{DataNodeTestUtils#injectDataDirFailure}} is not supported in Windows then 
> the test fails.
> It would be better to add {{assumeNotWindows}} into 
> {{DataNodeTestUtils#injectDataDirFailure}} in case we forget to add this in 
> test methods.
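
A small JUnit 4 sketch of the proposed guard; assumeNotWindows below is a stand-in written with plain JUnit assumptions, not Hadoop's actual test helper:

{code:java}
import static org.junit.Assume.assumeFalse;

import org.junit.Test;

public class DiskFailureInjectionGuardSketch {

  /** Skip (rather than fail) the calling test when running on Windows. */
  private static void assumeNotWindows() {
    String os = System.getProperty("os.name").toLowerCase();
    assumeFalse("Disk failure injection is not supported on Windows",
        os.contains("windows"));
  }

  @Test
  public void testStorageFailureInjection() {
    // Guard first, so the test is reported as skipped on Windows
    // instead of failing on the data-dir rename.
    assumeNotWindows();
    // ... inject the data dir failure and run the assertions here ...
  }
}
{code}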



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13187) RBF: Fix Routers information shown in the web UI

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13187:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Fix Routers information shown in the web UI
> 
>
> Key: HDFS-13187
> URL: https://issues.apache.org/jira/browse/HDFS-13187
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13187.000.patch, HDFS-13187.001.patch, 
> HDFS-13187.002.patch, HDFS-13187.003.patch, 
> image-2018-02-23-09-23-29-495.png, router.png
>
>
> HDFS-13043 added a component to the existing web UI to include router information 
> there. But currently the UI doesn't render correctly: the new table is shown at 
> the bottom of each tab, and it has no data. Some code is missing on 
> the .html and .js side.
>  
> A quick screenshot of what it shows currently (at the bottom of each page 
> tab):
> !image-2018-02-23-09-23-29-495.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13240) RBF: Update some inaccurate document descriptions

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13240:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Update some inaccurate document descriptions
> -
>
> Key: HDFS-13240
> URL: https://issues.apache.org/jira/browse/HDFS-13240
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13240.001.patch, HDFS-13240.002.patch
>
>
> In the RBF doc, some places are not described accurately 
> (https://issues.apache.org/jira/browse/HDFS-13214?focusedCommentId=16389026=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16389026).
>  This can sometimes mislead users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13239:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Fix non-empty dir warning message when setting default EC policy
> 
>
> Key: HDFS-13239
> URL: https://issues.apache.org/jira/browse/HDFS-13239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, 
> HDFS-13239.02.patch, HDFS-13239.03.patch, HDFS-13239.04.patch
>
>
> When EC policy is set on a non-empty directory, the following warning message 
> is given:
> {code}
> $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> {code}
> When we do not specify the -policy parameter when setting EC policy on a 
> directory, it takes the default EC policy. Setting default EC policy in this 
> way on a non-empty directory gives the following warning message:
> {code}
> $hdfs ec -setPolicy -path /ec2
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to null
> {code}
> Notice that the warning message in the 2nd case has the ecPolicy name shown 
> as null. We should instead give the default EC policy name in this message.
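
The fix boils down to resolving the effective (default) policy before building the warning text instead of formatting a null. A hedged sketch of that null-safe message construction, with hypothetical names:

{code:java}
public class EcWarningMessageSketch {

  /** Hypothetical system-wide default, used when -policy is omitted. */
  static final String DEFAULT_EC_POLICY_NAME = "RS-6-3-1024k";

  static String buildNonEmptyDirWarning(String requestedPolicyName) {
    // Fall back to the default policy name instead of printing "null".
    String effective = (requestedPolicyName != null)
        ? requestedPolicyName : DEFAULT_EC_POLICY_NAME;
    return "Warning: setting erasure coding policy on a non-empty directory "
        + "will not automatically convert existing files to " + effective;
  }

  public static void main(String[] args) {
    System.out.println(buildNonEmptyDirWarning("RS-6-3-1024k"));
    System.out.println(buildNonEmptyDirWarning(null)); // no longer prints null
  }
}
{code}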



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13099) RBF: Use the ZooKeeper as the default State Store

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13099:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Use the ZooKeeper as the default State Store
> -
>
> Key: HDFS-13099
> URL: https://issues.apache.org/jira/browse/HDFS-13099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: incompatible, incompatibleChange
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13099.001.patch, HDFS-13099.002.patch, 
> HDFS-13099.003.patch, HDFS-13099.004.patch, HDFS-13099.005.patch, 
> HDFS-13099.006.patch
>
>
> Currently the State Store Driver related settings are only defined in its 
> implementation classes.
> {noformat}
> public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl {
> ...
>   /** Configuration keys. */
>   public static final String FEDERATION_STORE_ZK_DRIVER_PREFIX =
>   DFSConfigKeys.FEDERATION_STORE_PREFIX + "driver.zk.";
>   public static final String FEDERATION_STORE_ZK_PARENT_PATH =
>   FEDERATION_STORE_ZK_DRIVER_PREFIX + "parent-path";
>   public static final String FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT =
>   "/hdfs-federation";
> ..
> {noformat}
> Actually, they should be moved into the class {{DFSConfigKeys}} and documented in 
> the file {{hdfs-default.xml}}. This will help more users learn about these settings and 
> how to use them.
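
A hedged sketch of the direction described: hoist the keys into a central config-keys class so they can be referenced from one place and documented in hdfs-default.xml. The class name and prefix value are illustrative assumptions, not the committed change:

{code:java}
/** Illustrative central config-keys holder, in the spirit of DFSConfigKeys. */
public final class FederationStoreConfigKeysSketch {

  // Assumed prefix value, for illustration only.
  public static final String FEDERATION_STORE_PREFIX =
      "dfs.federation.router.store.";

  // ZooKeeper State Store driver settings, hoisted out of
  // StateStoreZooKeeperImpl so hdfs-default.xml can document them.
  public static final String FEDERATION_STORE_ZK_DRIVER_PREFIX =
      FEDERATION_STORE_PREFIX + "driver.zk.";
  public static final String FEDERATION_STORE_ZK_PARENT_PATH =
      FEDERATION_STORE_ZK_DRIVER_PREFIX + "parent-path";
  public static final String FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT =
      "/hdfs-federation";

  private FederationStoreConfigKeysSketch() {
    // constants only
  }
}
{code}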



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12933) Improve logging when DFSStripedOutputStream failed to write some blocks

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12933:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Improve logging when DFSStripedOutputStream failed to write some blocks
> ---
>
> Key: HDFS-12933
> URL: https://issues.apache.org/jira/browse/HDFS-12933
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: chencan
>Priority: Minor
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-12933.001.patch
>
>
> Currently, if there are fewer DataNodes than the erasure coding policy's (# of 
> data blocks + # of parity blocks), the client sees this:
> {noformat}
> 09:18:24 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
> 09:18:24 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 
> corrupt blocks.
> {noformat}
> The 1st line is good. The 2nd line may be confusing to end users. We should 
> investigate the error and make the message more general / accurate; maybe something like 
> 'failed to read x blocks'.
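
A minimal sketch of a clearer warning along the lines suggested; the wording and names are illustrative only:

{code:java}
import java.util.Arrays;
import java.util.List;

public class StripedWriteWarningSketch {

  /** Build a clearer warning from the block indices that could not be written. */
  static String buildWarning(long blockGroupId, List<Integer> failedIndices) {
    return String.format(
        "Block group <%d>: failed to write %d block(s) (indices %s); "
            + "there may not be enough DataNodes for the EC policy.",
        blockGroupId, failedIndices.size(), failedIndices);
  }

  public static void main(String[] args) {
    // Mirrors the example above: parity block index 13 could not be allocated.
    System.out.println(buildWarning(1L, Arrays.asList(13)));
  }
}
{code}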



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13296) GenericTestUtils generates paths with drive letter in Windows and fail webhdfs related test cases

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13296:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> GenericTestUtils generates paths with drive letter in Windows and fail 
> webhdfs related test cases
> -
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13296.000.patch
>
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and 
> adds a drive letter to the path on Windows. Some test cases use the generated 
> path to send webhdfs requests, which fail due to the drive letter in the 
> URI, like: "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test"
> GenericTestUtils#getTempPath has a similar issue on Windows.
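
A tiny sketch of stripping the Windows drive letter before embedding a local test path in a webhdfs URI; this is a workaround sketch, not the actual GenericTestUtils fix:

{code:java}
public class WebHdfsTestPathSketch {

  /** Drop a leading drive letter ("D:") and normalise separators. */
  static String toWebHdfsPath(String localPath) {
    String p = localPath.replace('\\', '/');
    if (p.matches("^[A-Za-z]:/.*")) {
      p = p.substring(2);              // "D:/target/..." -> "/target/..."
    }
    return p;
  }

  public static void main(String[] args) {
    System.out.println(
        toWebHdfsPath("D:/target/test/data/vUqZkOrBZa/test"));
    // -> /target/test/data/vUqZkOrBZa/test
  }
}
{code}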



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12070:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: HDFS-12070.0.patch, HDFS-12070.1.patch, lease.patch
>
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node with a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially: alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but, unlike the 
> first stage, fails if any node fails_.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection-refused error induces the bad node to be pruned from 
> the candidates.  Then recovery succeeds, the lease is released, under-replication 
> is fixed, and the block is invalidated on the bad node.
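
A stripped-down sketch of the behaviour argued for above: tolerate individual failures during the sync stage and report the nodes that did succeed, instead of aborting the whole recovery. The interface and method names are hypothetical stand-ins, not HDFS's actual recovery protocol types:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class RecoverySyncSketch {

  interface ReplicaSyncTarget {
    String id();
    void updateReplicaUnderRecovery(long recoveryId, long newLength)
        throws Exception;
  }

  /**
   * Push the new length to every node in the sync list; collect successes
   * rather than failing the whole recovery on the first bad node.
   */
  static List<String> syncReplicas(List<ReplicaSyncTarget> syncList,
      long recoveryId, long newLength) {
    List<String> succeeded = new ArrayList<>();
    for (ReplicaSyncTarget node : syncList) {
      try {
        node.updateReplicaUnderRecovery(recoveryId, newLength);
        succeeded.add(node.id());
      } catch (Exception e) {
        // Log and continue; the NN can later re-replicate from the
        // nodes that did succeed instead of leaving the file open.
        System.err.println("Sync failed on " + node.id() + ": " + e);
      }
    }
    return succeeded;  // report these to the NN when committing the recovery
  }
}
{code}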



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13214) RBF: Complete document of Router configuration

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13214:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Complete document of Router configuration
> --
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Router. 
> However, with this configuration on the server node, the Router fails to start with 
> the error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}}, {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.
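
The quoted error message itself points at the workaround: disambiguate the local node explicitly. A hedged sketch of the extra settings for the Router host, expressed with Hadoop's Configuration API; the concrete values are assumptions for this example:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RouterDisambiguationSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Tell this host which nameservice/namenode identity it should assume,
    // so the suffix lookup no longer finds multiple matching addresses.
    conf.set("dfs.nameservice.id", "ns-fed");
    conf.set("dfs.ha.namenode.id", "r1");
    System.out.println(conf.get("dfs.nameservice.id"));
  }
}
{code}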



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13389) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427512#comment-16427512
 ] 

genericqa commented on HDFS-13389:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
44s{color} | {color:red} server in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} server in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} server in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  

[jira] [Updated] (HDFS-13052) WebHDFS: Add support for snasphot diff

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13052:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> WebHDFS: Add support for snasphot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: snapshot, webhdfs
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch, 
> HDFS-13052.006.patch, HDFS-13052.007.patch
>
>
> This Jira aims to implement snapshot diff operation for webHdfs filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11394) Support for getting erasure coding policy through WebHDFS#FileStatus

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-11394:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Support for getting erasure coding policy through WebHDFS#FileStatus
> 
>
> Key: HDFS-11394
> URL: https://issues.apache.org/jira/browse/HDFS-11394
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, namenode
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Major
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-11394.005.patch, HDFS-11394.006.patch, 
> HDFS-11394.01.patch, HDFS-11394.02.patch, HDFS-11394.03.patch, 
> HDFS-11394.04.patch
>
>
> We can expose the erasure coding policy of an erasure-coded directory through a 
> WebHDFS method, as we do for the storage policy. This information can be used by the 
> NameNode Web UI to show the details of erasure-coded directories.
> see: HDFS-8196



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13081) Datanode#checkSecureConfig should allow SASL and privileged HTTP

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13081:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Datanode#checkSecureConfig should allow SASL and privileged HTTP
> 
>
> Key: HDFS-13081
> URL: https://issues.apache.org/jira/browse/HDFS-13081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 3.0.0
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13081.000.patch, HDFS-13081.001.patch, 
> HDFS-13081.002.patch, HDFS-13081.003.patch, HDFS-13081.004.patch, 
> HDFS-13081.005.patch, HDFS-13081.006.patch
>
>
> Datanode#checkSecureConfig currently checks the following to determine whether 
> secure datanode is enabled. 
>  # The server has bound to privileged ports for RPC and HTTP via 
> SecureDataNodeStarter.
>  # The configuration enables SASL on DataTransferProtocol and HTTPS (no plain 
> HTTP) for the HTTP server. 
> Authentication of Datanode RPC server can be done either via SASL handshake 
> or JSVC/privilege RPC port. 
> This guarantees authentication of the datanode RPC server before a client 
> transmits a secret, such as a block access token. 
> Authentication of the HTTP server can also be done either via HTTPS/SSL or a 
> JSVC/privileged HTTP port. This guarantees authentication of the datanode HTTP 
> server before a client transmits a secret, such as a delegation token.
> This ticket is open to allow privileged HTTP as an alternative to HTTPS to 
> work with SASL based RPC protection.
>  
> cc: [~cnauroth] , [~daryn], [~jnpandey] for additional feedback.
>  
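
A condensed sketch of the acceptance rule the description argues for, written as a pure boolean check with hypothetical flag names, not the actual Datanode#checkSecureConfig signature:

{code:java}
public class SecureConfigCheckSketch {

  /**
   * Secure datanode config is considered valid when the RPC side is protected
   * (SASL on DataTransferProtocol or a privileged RPC port) AND the HTTP side
   * is protected (HTTPS-only or a privileged HTTP port).
   */
  static boolean isSecureConfigOk(boolean saslOnDataTransfer,
      boolean privilegedRpcPort, boolean httpsOnly, boolean privilegedHttpPort) {
    boolean rpcProtected = saslOnDataTransfer || privilegedRpcPort;
    boolean httpProtected = httpsOnly || privilegedHttpPort;
    return rpcProtected && httpProtected;
  }

  public static void main(String[] args) {
    // The case this JIRA enables: SASL for RPC plus a privileged HTTP port.
    System.out.println(isSecureConfigOk(true, false, false, true)); // true
  }
}
{code}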



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12935:
-
Fix Version/s: (was: 3.0.2)

> Get ambiguous result for DFSAdmin command in HA mode when only one namenode 
> is up
> -
>
> Key: HDFS-12935
> URL: https://issues.apache.org/jira/browse/HDFS-12935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.9.0, 3.0.0-beta1, 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, 
> HDFS-12935.004.patch, HDFS-12935.005.patch, HDFS-12935.006-branch.2.patch, 
> HDFS-12935.006.patch, HDFS-12935.007-branch.2.patch, HDFS-12935.007.patch, 
> HDFS-12935.008.patch, HDFS-12935.009-branch-2.patch, 
> HDFS-12935.009-branch.2.patch, HDFS-12935.009.patch, HDFS_12935.001.patch
>
>
> In HA mode, if one namenode is down, most functions still work. Consider 
> the following two cases:
>  (1) nn1 up and nn2 down
>  (2) nn1 down and nn2 up
> These two cases should be equivalent. However, some of the DFSAdmin 
> commands have ambiguous results. The commands can be sent successfully 
> to the namenode that is up, but they are functionally useful only when nn1 is up, 
> apart from the exception (an IOException when connecting to the down namenode 
> nn2). If only nn2 is up, the commands have no effect at all and only the exception 
> from connecting to nn1 is shown.
> Take the command "hdfs dfsadmin -setBalancerBandwidth", which aims to 
> set the balancer bandwidth value for datanodes, as an example. It works and all 
> the datanodes get the setting value only when nn1 is up. If only nn2 is 
> up, the command throws an exception directly and no datanode gets the bandwidth 
> setting. Approximately ten DFSAdmin commands use a similar logic 
> and may be ambiguous.
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei02:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei01:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# 
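
A minimal sketch of the less ambiguous behaviour described above: try every configured namenode, report per-namenode success or failure, and let the caller decide that at least one active namenode accepting the request is a success. The types and names are hypothetical, not the actual DFSAdmin code:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class HaAdminCommandSketch {

  interface NamenodeClient {
    void setBalancerBandwidth(long bytesPerSecond) throws Exception;
  }

  /** Run the command against every NN and report each result separately. */
  static Map<String, String> setBalancerBandwidthOnAll(
      Map<String, NamenodeClient> namenodes, long bandwidth) {
    Map<String, String> results = new LinkedHashMap<>();
    for (Map.Entry<String, NamenodeClient> e : namenodes.entrySet()) {
      try {
        e.getValue().setBalancerBandwidth(bandwidth);
        results.put(e.getKey(), "Balancer bandwidth set to " + bandwidth);
      } catch (Exception ex) {
        results.put(e.getKey(), "FAILED: " + ex.getMessage());
      }
    }
    return results;  // caller prints one line per namenode (nn1, nn2, ...)
  }
}
{code}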



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)

2018-04-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427502#comment-16427502
 ] 

Anu Engineer commented on HDFS-10419:
-

FYI, we have renamed HDSL to HDDS via HDFS-13405. Thank you for the comments.

> Building HDFS on top of new storage layer (HDSL)
> 
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
> Attachments: Evolving NN using new block-container layer.pdf
>
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage 
> containers.
> # The block management work can be separated out of the NameNode and will be 
> handled by the storage container layer in a more distributed way. The 
> NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which 
> are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both NameNode and the storage 
> container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this 
> design. Currently the NameNode's heaviest workload comes from the block 
> management, which includes maintaining the block-DataNode mapping, receiving 
> full/incremental block reports, tracking block states (under/over/miss 
> replicated), and joining every writing pipeline protocol to guarantee the 
> data consistency. This work brings a high memory footprint and makes the NameNode 
> suffer from GC. HDFS-5477 already proposes to convert BlockManager into a 
> service. If we can build HDFS on top of the storage container layer, we not 
> only separate out the BlockManager from the NameNode, but also replace it 
> with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the 
> work proposed here is still in an experimental/exploring stage. We can do 
> this experiment in a feature branch so that people with interests can be 
> involved.
> A design doc will be uploaded later explaining more details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427501#comment-16427501
 ] 

Anu Engineer commented on HDFS-13405:
-

[~ajayydv] Thanks for the update, I have committed the patch to the feature 
branch.

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13405-HDFS-7240.001.patch, 
> HDFS-13405-HDFS-7240.addendum.patch
>
>
> During the Ozone merge vote and in HDFS-10419 Jira we discussed renaming HDSL 
> to a better name. The community consensus is to name the block layer as 
> Hadoop Distributed Data Store or HDDS. This jira addresses that renaming. 
> This JIRA also converts the ozone's maven build profile from *-Phdsl* to 
> *-Phdds*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13241:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13241.001.patch, HDFS-13241.002.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13226) RBF: Throw the exception if mount table entry validated failed

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13226:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Throw the exception if mount table entry validated failed
> --
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, 
> HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, 
> HDFS-13226.006.patch, HDFS-13226.007.patch, HDFS-13226.008.patch, 
> HDFS-13226.009.patch
>
>
> One of the mount entry source path rules is that the source path must start 
> with '/'. Somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that this entry was added successfully.
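
A minimal sketch of the validation the title asks for: reject the entry loudly instead of reporting success. The method and exception choice are illustrative:

{code:java}
public class MountEntryValidationSketch {

  /** Throw instead of silently accepting an invalid mount entry. */
  static void validateSourcePath(String src) {
    if (src == null || !src.startsWith("/")) {
      throw new IllegalArgumentException(
          "Invalid mount table entry: source path must start with '/': " + src);
    }
  }

  public static void main(String[] args) {
    validateSourcePath("/addnode");          // ok
    try {
      validateSourcePath("addnode/");        // mirrors the JIRA example
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());    // surfaced instead of "success"
    }
  }
}
{code}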



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12512:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch, HDFS-12512.007.patch, 
> HDFS-12512.008.patch, HDFS-12512.009.patch, HDFS-12512.010.patch, 
> HDFS-12512.011.patch, HDFS-12512.012.patch, HDFS-12512.013.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12156) TestFSImage fails without -Pnative

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12156:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> TestFSImage fails without -Pnative
> --
>
> Key: HDFS-12156
> URL: https://issues.apache.org/jira/browse/HDFS-12156
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch
>
>
> TestFSImage#testCompression tests the LZ4 codec and fails when the native library 
> is not available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13405:
--
Attachment: HDFS-13405-HDFS-7240.addendum.patch

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13405-HDFS-7240.001.patch, 
> HDFS-13405-HDFS-7240.addendum.patch
>
>
> During the Ozone merge vote and in HDFS-10419 Jira we discussed renaming HDSL 
> to a better name. The community consensus is to name the block layer as 
> Hadoop Distributed Data Store or HDDS. This jira addresses that renaming. 
> This JIRA also converts the ozone's maven build profile from *-Phdsl* to 
> *-Phdds*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-13399:

Attachment: HDFS-13399-HDFS-12943.000.patch

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-05 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427453#comment-16427453
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Attaching a preliminary patch.
Notable changes:

(1) In order to support {{AlignmentContext}} across HA I had to modify the 
{{HAProxyFactory}} interface and pass the {{AlignmentContext}} across both 
{{createProxy}} methods. This resulted in modifying {{ClientHAProxyFactory}} 
and {{NameNodeHAProxyFactory}}.
(2) The {{RpcEngine}} interface changed in order to pass {{AlignmentContext}} 
through its main {{getProxy}} call. 
(3) Heavy use of method overloading was done to attempt to minimize changes. In 
most cases the "old" methods now just pass {{null}} for their alignmentContext 
parameter.
(4) Finally, {{Call}} was modified to have an overloaded method for 
construction, and {{Client.receiveRpcResponse}} and 
{{Client.sendRpcRequest}} make use of {{Call.alignmentContext}}, preserving the 
mapping of each DFSClient to its respective AlignmentContext per call. 
(5) TestStateAlignmentContext was modified to use a new DFSClient that had a 
Mockito-spied AlignmentContext passed into its ClientProtocol proxy.

The patch is pretty sizable but I believe the changes are overall fairly 
trivial.
Looking forward to your review(s).
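
A stripped-down sketch of the shape of the change described in (1)-(5): the context stops being a static field on the client class and instead rides along on each call object. All types below are simplified stand-ins, not Hadoop's actual Client, Call, or AlignmentContext classes:

{code:java}
public class PerClientContextSketch {

  /** Simplified stand-in for AlignmentContext. */
  interface AlignmentContext {
    void receiveResponseState(long lastSeenStateId);
  }

  /** Each call carries the context of the client that created it. */
  static class Call {
    final AlignmentContext alignmentContext;   // may be null for old callers
    Call(AlignmentContext alignmentContext) {
      this.alignmentContext = alignmentContext;
    }
  }

  /** The client no longer holds a static context; it is per instance. */
  static class Client {
    private final AlignmentContext alignmentContext;
    Client(AlignmentContext alignmentContext) {
      this.alignmentContext = alignmentContext;
    }
    Call createCall() {
      return new Call(alignmentContext);
    }
    void receiveRpcResponse(Call call, long stateIdFromServer) {
      if (call.alignmentContext != null) {
        call.alignmentContext.receiveResponseState(stateIdFromServer);
      }
    }
  }
}
{code}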


> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13400) WebHDFS append returned stream has incorrectly set position

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427442#comment-16427442
 ] 

genericqa commented on HDFS-13400:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 47 unchanged - 0 fixed = 48 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13400 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917626/HDFS-13400-unit-test.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 04486598d732 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23796/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23796/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427424#comment-16427424
 ] 

genericqa commented on HDFS-13353:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 16s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
54s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13353 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917730/HDFS-13353.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 4433b8957b9b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427421#comment-16427421
 ] 

Ajay Kumar commented on HDFS-13402:
---

There is a space missing at L44 C27. Also wondering if we can improve the second 
line further (i.e., "The most common uses HDFS as a backend."), as it is 
difficult for someone new to understand.
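
For example, the javadoc could be reworded along these lines (a wording 
suggestion only, not the committed text):
{code:java}
/**
 * {@link StateStoreDriver} implementation based on a filesystem. The most
 * common deployment uses HDFS as the backing store.
 */
{code}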

> RBF: Fix java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Attachments: HDFS-13402.001.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver}implementation based on a filesystem. The most 
> common uses
>  * HDFS as a backend.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13405:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~msingh] Thanks for the review, I have committed this patch to the feature 
branch.

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13405-HDFS-7240.001.patch
>
>
> During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
> HDSL to a better name. The community consensus is to name the block layer 
> Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
> also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427376#comment-16427376
 ] 

Anu Engineer commented on HDFS-13405:
-

[~msingh] Thanks for the review, I will commit this shortly.

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-13405-HDFS-7240.001.patch
>
>
> During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
> HDSL to a better name. The community consensus is to name the block layer 
> Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
> also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427373#comment-16427373
 ] 

Mukul Kumar Singh commented on HDFS-13405:
--

Thanks for working on this [~anu]. I have tested this locally and the new 
refactored code is working correctly.
+1, the patch looks good to me.

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-13405-HDFS-7240.001.patch
>
>
> During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
> HDSL to a better name. The community consensus is to name the block layer 
> Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
> also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13405:

Attachment: HDFS-13405-HDFS-7240.001.patch

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-13405-HDFS-7240.001.patch
>
>
> During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
> HDSL to a better name. The community consensus is to name the block layer 
> Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
> also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13405:

Status: Patch Available  (was: Open)

> Ozone: Rename HDSL to HDDS
> --
>
> Key: HDFS-13405
> URL: https://issues.apache.org/jira/browse/HDFS-13405
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-13405-HDFS-7240.001.patch
>
>
> During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
> HDSL to a better name. The community consensus is to name the block layer 
> Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
> also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13405) Ozone: Rename HDSL to HDDS

2018-04-05 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-13405:
---

 Summary: Ozone: Rename HDSL to HDDS
 Key: HDFS-13405
 URL: https://issues.apache.org/jira/browse/HDFS-13405
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer


During the Ozone merge vote and in the HDFS-10419 JIRA we discussed renaming 
HDSL to a better name. The community consensus is to name the block layer 
Hadoop Distributed Data Store (HDDS). This JIRA addresses that renaming and 
also converts Ozone's Maven build profile from *-Phdsl* to *-Phdds*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13292) Crypto command should give proper exception when key is already exist for zone directory

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427335#comment-16427335
 ] 

genericqa commented on HDFS-13292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917713/HDFS-13292.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux da469e3fff7e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23794/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23794/testReport/ |
| Max. process+thread count | 3144 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to test all EC policies

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12587:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to test all EC policies
> 
>
> Key: HDFS-12587
> URL: https://issues.apache.org/jira/browse/HDFS-12587
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-12587.1.patch, HDFS-12587.2.patch
>
>
> This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
> {{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
> policies with Parameterized tests on every run does not add much overhead.
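
A minimal sketch of the Parameterized pattern described above (JUnit 4). The 
class and test names are placeholders, and using 
SystemErasureCodingPolicies#getPolicies() as the parameter source is an 
assumption about how the policies would be enumerated, not the exact patch 
contents:
{code:java}
import java.util.ArrayList;
import java.util.Collection;

import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestWithEveryEcPolicy {

  private final ErasureCodingPolicy ecPolicy;

  public TestWithEveryEcPolicy(ErasureCodingPolicy policy) {
    this.ecPolicy = policy;
  }

  // Run every @Test method once per known EC policy.
  @Parameters(name = "{index}: {0}")
  public static Collection<Object[]> policies() {
    Collection<Object[]> params = new ArrayList<>();
    for (ErasureCodingPolicy policy : SystemErasureCodingPolicies.getPolicies()) {
      params.add(new Object[] {policy});
    }
    return params;
  }

  @Test
  public void testSomethingPerPolicy() {
    // A real test would exercise the class under test with
    // ecPolicy.getNumDataUnits(), getNumParityUnits(), getCellSize(), ...
  }
}
{code}
Since neither test spins up a minicluster, multiplying the test methods by the 
number of policies stays cheap, which is the point the description makes.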



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12505) Extend TestFileStatusWithECPolicy with a random EC policy

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12505:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> Extend TestFileStatusWithECPolicy with a random EC policy
> -
>
> Key: HDFS-12505
> URL: https://issues.apache.org/jira/browse/HDFS-12505
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-12505.1.patch, HDFS-12505.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427305#comment-16427305
 ] 

genericqa commented on HDFS-13237:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917732/HDFS-13237.002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 8c6f29169840 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23797/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch
>
>
> Document the feature to spread mount points across multiple subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal

2018-04-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427301#comment-16427301
 ] 

Ajay Kumar commented on HDFS-13279:
---

[~Tao Jie], thanks for updating the patch. A few comments:
 * We should try to avoid any change in default behaviour.
 ** Don't remove the default impl entry from core-default.xml.
 ** Similarly, we can do this with almost no change in NetworkTopology, 
DFSNetworkTopology and TestBalancerWithNodeGroup. We can update 
DatanodeManager#init to instantiate according to the value of "net.topology.impl".
 * NetworkTopologyWithWeightedRack
 ** Shall we override chooseRandom & chooseRandomWithStorageType to select the 
next random node using the rack weights, i.e. first select a rack according to 
its weighted probability and then select a node within that rack (see the 
sketch after this list)?
 *** choose a rack according to weighted probability and the excluded nodes.
 *** choose a random node from the rack selected in the first step.
 *** repeat until a node is found or all racks in scope are tried at least once.
 ** L29, shall we simplify the class javadoc to something like "The class 
extends NetworkTopology to represent racks weighted according to their number 
of datanodes."
 ** L38: change LimitedPrivate to Private.
 * WeightedRackBlockPlacementPolicy
 ** L43, chooseDataNode: instead of choosing a datanode twice, we can just call 
super.chooseRandom once chooseRandom is overridden in 
NetworkTopologyWithWeightedRack; that way chooseRandom is only called once.
 ** Add Private and Unstable annotations to the class.
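
For illustration, a minimal sketch of the rack-weighted selection loop 
suggested above. This is plain Java, not the actual 
NetworkTopologyWithWeightedRack code; the rack map, the tried-rack handling 
and pickRandomNodeFromRack() are hypothetical placeholders:
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class WeightedRackSketch {
  private final Random random = new Random();
  // rack name -> number of datanodes on that rack (hypothetical input)
  private final Map<String, Integer> nodesPerRack;

  WeightedRackSketch(Map<String, Integer> nodesPerRack) {
    this.nodesPerRack = nodesPerRack;
  }

  /** Pick a rack with probability proportional to its datanode count. */
  String chooseRack(Set<String> triedRacks) {
    int total = 0;
    for (Map.Entry<String, Integer> e : nodesPerRack.entrySet()) {
      if (!triedRacks.contains(e.getKey())) {
        total += e.getValue();
      }
    }
    if (total == 0) {
      return null; // every rack in scope has already been tried
    }
    int r = random.nextInt(total);
    for (Map.Entry<String, Integer> e : nodesPerRack.entrySet()) {
      if (triedRacks.contains(e.getKey())) {
        continue;
      }
      r -= e.getValue();
      if (r < 0) {
        return e.getKey();
      }
    }
    return null; // not reached while total > 0
  }

  /** Weighted rack first, then a node from that rack; retry until found. */
  String chooseNode() {
    Set<String> tried = new HashSet<>();
    String rack;
    while ((rack = chooseRack(tried)) != null) {
      String node = pickRandomNodeFromRack(rack);
      if (node != null) {
        return node;
      }
      tried.add(rack); // no eligible node on this rack, exclude it and retry
    }
    return null;
  }

  private String pickRandomNodeFromRack(String rack) {
    // Placeholder: the real policy would consult the topology and the
    // excluded-node set here.
    return rack + "/dn0";
  }

  public static void main(String[] args) {
    Map<String, Integer> racks = new HashMap<>();
    racks.put("/rack1", 15);
    racks.put("/rack2", 15);
    racks.put("/rack3", 15);
    racks.put("/rack4", 5);
    System.out.println(new WeightedRackSketch(racks).chooseNode());
  }
}
{code}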

 

 

> Datanodes usage is imbalanced if number of nodes per rack is not equal
> --
>
> Key: HDFS-13279
> URL: https://issues.apache.org/jira/browse/HDFS-13279
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13279.001.patch, HDFS-13279.002.patch, 
> HDFS-13279.003.patch, HDFS-13279.004.patch, HDFS-13279.005.patch
>
>
> In a Hadoop cluster, the number of nodes per rack can differ. For example, if 
> we have 50 datanodes in all and 15 datanodes per rack, 5 nodes remain on the 
> last rack. In this situation, we find that storage usage on those last 5 
> nodes is much higher than on the other nodes.
>  With the default block placement policy, each block's first replica has the 
> same probability of being written to each datanode, but the probability of 
> the 2nd/3rd replica landing on the last 5 nodes is much higher than for the 
> other nodes.
>  Consider writing 50 blocks to these 50 datanodes. The first replicas of the 
> 50 blocks are distributed to the 50 nodes equally. The 2nd replicas of the 
> blocks whose 1st replica is on rack1 (15 replicas) are sent equally to the 
> other 35 nodes, so each of those nodes receives 0.43 of a replica; the same 
> holds for blocks on rack2 and rack3. As a result, each node on rack4 (5 
> nodes) receives 1.29 remote replicas in all, while every other node receives 
> 0.97.
> ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)||
> |From rack1|-|15/35=0.43|0.43|0.43|
> |From rack2|0.43|-|0.43|0.43|
> |From rack3|0.43|0.43|-|0.43|
> |From rack4|5/45=0.11|0.11|0.11|-|
> |Total|0.97|0.97|0.97|1.29|
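
As a sanity check on the numbers in the table above, a small self-contained 
snippet (not HDFS code) that reproduces the expected remote-replica load per 
node for rack sizes 15/15/15/5:
{code:java}
public class ReplicaImbalance {
  public static void main(String[] args) {
    int[] rackSize = {15, 15, 15, 5};
    int totalNodes = 0;
    for (int s : rackSize) {
      totalNodes += s;
    }
    // load[dst] = expected 2nd replicas per node on rack dst, received from
    // blocks whose 1st replica sits on some other rack.
    double[] load = new double[rackSize.length];
    for (int src = 0; src < rackSize.length; src++) {
      for (int dst = 0; dst < rackSize.length; dst++) {
        if (src == dst) {
          continue;
        }
        // rackSize[src] blocks start on rack src; their next replica is spread
        // evenly over the (totalNodes - rackSize[src]) nodes outside that rack.
        load[dst] += (double) rackSize[src] / (totalNodes - rackSize[src]);
      }
    }
    for (int i = 0; i < load.length; i++) {
      System.out.printf("Rack%d: %.2f remote replicas per node%n", i + 1, load[i]);
    }
  }
}
{code}
Running it prints 0.97 for rack1-3 and 1.29 for rack4, matching the last row of 
the table.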



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427297#comment-16427297
 ] 

Hudson commented on HDFS-13350:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13928 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13928/])
HDFS-13350. Negative legacy block ID will confuse Erasure Coding to be (lei: 
rev d737bf99d44ce34cd01baad716d23df269267c95)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java


> Negative legacy block ID will confuse Erasure Coding to be considered as 
> striped block
> --
>
> Key: HDFS-13350
> URL: https://issues.apache.org/jira/browse/HDFS-13350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch, 
> HDFS-13350.02.patch
>
>
> HDFS-4645 changed HDFS block IDs from randomly generated to sequential 
> positive IDs. Later on, HDFS EC was built on the assumption that normal 
> 3x-replica block IDs are positive, so EC reuses negative IDs for striped 
> blocks.
> However, legacy block IDs in the system can be negative, so we should not use 
> a hardcoded check to decide whether a block is striped or not:
> {code}
>   public static boolean isStripedBlockID(long id) {
> return BlockType.fromBlockId(id) == STRIPED;
>   }
> {code}
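
To make the concern concrete, a hypothetical, self-contained sketch of the 
direction the description argues for. It is not the committed fix and not the 
real BlockIdManager API; "mayHaveLegacyRandomIds" stands in for whatever 
bookkeeping the namesystem keeps about pre-HDFS-4645 blocks:
{code:java}
public class BlockIdClassifier {
  enum BlockType { CONTIGUOUS, STRIPED }

  private final boolean mayHaveLegacyRandomIds;

  BlockIdClassifier(boolean mayHaveLegacyRandomIds) {
    this.mayHaveLegacyRandomIds = mayHaveLegacyRandomIds;
  }

  /** Sign-based classification, only safe when no legacy random IDs exist. */
  static BlockType fromBlockId(long id) {
    return id < 0 ? BlockType.STRIPED : BlockType.CONTIGUOUS;
  }

  /**
   * Trust a negative ID to mean "striped" only when the cluster never
   * allocated random (legacy) IDs; otherwise the caller has to fall back to
   * per-block metadata instead of the ID alone.
   */
  boolean isDefinitelyStriped(long id) {
    return !mayHaveLegacyRandomIds && fromBlockId(id) == BlockType.STRIPED;
  }

  public static void main(String[] args) {
    long negativeLegacyId = -7286342123456789L; // a random pre-HDFS-4645 ID
    System.out.println(fromBlockId(negativeLegacyId)); // STRIPED - misclassified
    System.out.println(new BlockIdClassifier(true)
        .isDefinitelyStriped(negativeLegacyId)); // false - needs more evidence
  }
}
{code}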



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13145:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> SBN crash when transition to ANN with in-progress edit tailing enabled
> --
>
> Key: HDFS-13145
> URL: https://issues.apache.org/jira/browse/HDFS-13145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13145.000.patch, HDFS-13145.001.patch
>
>
> With in-progress edit log tailing enabled, {{QuorumOutputStream}} will send 
> two batches to the JNs: a normal edit batch followed by a dummy batch that 
> updates the committed transaction ID on the JNs.
> {code}
>   QuorumCall<AsyncLogger, Void> qcall = loggers.sendEdits(
>   segmentTxId, firstTxToFlush,
>   numReadyTxns, data);
>   loggers.waitForWriteQuorum(qcall, writeTimeoutMs, "sendEdits");
>   
>   // Since we successfully wrote this batch, let the loggers know. Any 
> future
>   // RPCs will thus let the loggers know of the most recent transaction, 
> even
>   // if a logger has fallen behind.
>   loggers.setCommittedTxId(firstTxToFlush + numReadyTxns - 1);
>   // If we don't have this dummy send, committed TxId might be one-batch
>   // stale on the Journal Nodes
>   if (updateCommittedTxId) {
> QuorumCall<AsyncLogger, Void> fakeCall = loggers.sendEdits(
> segmentTxId, firstTxToFlush,
> 0, new byte[0]);
> loggers.waitForWriteQuorum(fakeCall, writeTimeoutMs, "sendEdits");
>   }
> {code}
> Between each batch, it will wait for the JNs to reach a quorum. However, if 
> the ANN crashes in between, then SBN will crash while transiting to ANN:
> {code}
> java.lang.IllegalStateException: Cannot start writing at txid 24312595802 
> when there is a stream available for read: ..
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:329)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1196)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1839)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1707)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1622)
> at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
> at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:851)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:794)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2490)
> 2018-02-13 00:43:20,728 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> This is because without the dummy batch, the {{commitTxnId}} will lag behind 
> the {{endTxId}}, which causes the check in {{openForWrite}} to fail:
> {code}
> List<EditLogInputStream> streams = new ArrayList<>();
> journalSet.selectInputStreams(streams, segmentTxId, true, false);
> if (!streams.isEmpty()) {
>   String error = String.format("Cannot start writing at txid %s " +
> "when there is a stream available for read: %s",
> segmentTxId, streams.get(0));
>   IOUtils.cleanupWithLogger(LOG,
>   streams.toArray(new EditLogInputStream[0]));
>   throw new IllegalStateException(error);
> }
> {code}
> In our environment, this can be reproduced pretty consistently, which will 
> leave the cluster with no running namenodes. Even though we are using a 2.8.2 
> backport, I believe the same issue also exists in 3.0.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
