[jira] [Commented] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198655#comment-17198655
 ] 

Hadoop QA commented on HDFS-15588:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
52s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 74 unchanged - 0 fixed = 75 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 

[jira] [Updated] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sr2020 updated HDFS-15588:
--
Attachment: HDFS-15588-002.patch

> Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and 
> can cause a healthy datanode to be excluded
> 
>
> Key: HDFS-15588
> URL: https://issues.apache.org/jira/browse/HDFS-15588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client, security
>Reporter: sr2020
>Priority: Major
> Attachments: HDFS-15588-001.patch, HDFS-15588-002.patch
>
>
> *Problem*:
>  Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such 
> as 1) makes the lifetime of a block token very short; as a result, some 
> healthy datanodes can be wrongly excluded by the client because of an 
> `InvalidBlockTokenException`.
> More specifically, in `nextBlockOutputStream` the client obtains an 
> `accessToken` from the namenode and uses it to talk to a datanode. The 
> lifetime of the `accessToken` can be made very short (e.g., 1 minute) via 
> `dfs.block.access.token.lifetime`. Under some extreme conditions (such as a 
> VM migration, a temporary network issue, or a stop-the-world GC), the 
> `accessToken` may already have expired by the time the client uses it to talk 
> to the datanode. In that case `createBlockOutputStream` returns false (and 
> masks the `InvalidBlockTokenException`), so the client assumes the datanode 
> is unhealthy, marks it as "excluded", and never reads from or writes to it 
> again.
> Related code in `nextBlockOutputStream`:
> {code:java}
> // Connect to first DataNode in the list.
> success = createBlockOutputStream(nodes, nextStorageTypes, nextStorageIDs,
> 0L, false);
> if (!success) {
>   LOG.warn("Abandoning " + block);
>   dfsClient.namenode.abandonBlock(block.getCurrentBlock(),
>   stat.getFileId(), src, dfsClient.clientName);
>   block.setCurrentBlock(null);
>   final DatanodeInfo badNode = nodes[errorState.getBadNodeIndex()];
>   LOG.warn("Excluding datanode " + badNode);
>   excludedNodes.put(badNode, badNode);
> }
> {code}
>  
> *Proposed solution*:
>  A simple retry on the same datanode after catching 
> `InvalidBlockTokenException` can solve this problem (assuming the extreme 
> conditions are rare). Since `dfs.block.access.token.lifetime` currently 
> accepts even values like 0, we could also prevent users from setting it to a 
> small value (e.g., by enforcing a minimum of 5 minutes for this parameter).
> We have submitted a patch that retries after catching 
> `InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide 
> a patch that enforces a larger minimum value for 
> `dfs.block.access.token.lifetime` if that is a better way to handle this.
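The retry proposed above can be sketched as follows. This is a minimal, self-contained illustration rather than the actual HDFS-15588 patch: the class name, the simulated `createBlockOutputStream`, and the `writeBlock` helper are hypothetical stand-ins for the real `DataStreamer` code.

```java
public class TokenRetrySketch {
    // Hypothetical stand-in for HDFS's InvalidBlockTokenException.
    static class InvalidBlockTokenException extends Exception {}

    // Simulates createBlockOutputStream: throws while the supplied token is
    // expired, succeeds once a fresh token is used.
    static void createBlockOutputStream(boolean tokenExpired)
            throws InvalidBlockTokenException {
        if (tokenExpired) {
            throw new InvalidBlockTokenException();
        }
    }

    // Retry once on the same datanode with a refreshed token before the
    // caller falls back to excluding the node.
    static boolean writeBlock(boolean firstTokenExpired) {
        boolean tokenExpired = firstTokenExpired;
        for (int attempt = 0; attempt < 2; attempt++) {
            try {
                createBlockOutputStream(tokenExpired);
                return true; // write succeeded; the datanode is healthy
            } catch (InvalidBlockTokenException e) {
                // Re-fetch the access token from the namenode and retry the
                // same datanode instead of marking it excluded.
                tokenExpired = false;
            }
        }
        return false; // repeated failure: caller may exclude the node
    }

    public static void main(String[] args) {
        // An expired first token no longer gets the datanode excluded.
        System.out.println(writeBlock(true)); // prints true
    }
}
```

The point of the sketch is that exclusion only happens after the token has been refreshed and the same node has been retried, so a transient expiry no longer poisons a healthy datanode.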



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198555#comment-17198555
 ] 

Hadoop QA commented on HDFS-15566:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 347 unchanged - 0 fixed = 348 total (was 347) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  

[jira] [Commented] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198532#comment-17198532
 ] 

Hadoop QA commented on HDFS-15588:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
45s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 54s{color} 
| {color:red} hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs-client in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-hdfs-client in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 2 new + 74 unchanged - 0 fixed = 76 total (was 74) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
37s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  

[jira] [Updated] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sr2020 updated HDFS-15588:
--
Description: 
*Problem*:
 Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such as 
1) makes the lifetime of a block token very short; as a result, some healthy 
datanodes can be wrongly excluded by the client because of an 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream` the client obtains an 
`accessToken` from the namenode and uses it to talk to a datanode. The lifetime 
of the `accessToken` can be made very short (e.g., 1 minute) via 
`dfs.block.access.token.lifetime`. Under some extreme conditions (such as a VM 
migration, a temporary network issue, or a stop-the-world GC), the 
`accessToken` may already have expired by the time the client uses it to talk 
to the datanode. In that case `createBlockOutputStream` returns false (and 
masks the `InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and never reads from or writes to it again.

Related code in `nextBlockOutputStream`:
{code:java}
// Connect to first DataNode in the list.
success = createBlockOutputStream(nodes, nextStorageTypes, nextStorageIDs,
0L, false);

if (!success) {
  LOG.warn("Abandoning " + block);
  dfsClient.namenode.abandonBlock(block.getCurrentBlock(),
  stat.getFileId(), src, dfsClient.clientName);
  block.setCurrentBlock(null);
  final DatanodeInfo badNode = nodes[errorState.getBadNodeIndex()];
  LOG.warn("Excluding datanode " + badNode);
  excludedNodes.put(badNode, badNode);
}
{code}
 

*Proposed solution*:
 A simple retry on the same datanode after catching 
`InvalidBlockTokenException` can solve this problem (assuming the extreme 
conditions are rare). Since `dfs.block.access.token.lifetime` currently accepts 
even values like 0, we could also prevent users from setting it to a small 
value (e.g., by enforcing a minimum of 5 minutes for this parameter).

We have submitted a patch that retries after catching 
`InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide a 
patch that enforces a larger minimum value for 
`dfs.block.access.token.lifetime` if that is a better way to handle this.
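The alternative fix mentioned above, enforcing a minimum for `dfs.block.access.token.lifetime`, could look like the following sketch. The 5-minute floor and the `effectiveLifetime` helper are illustrative assumptions; HDFS does not currently enforce any floor on this setting.

```java
public class TokenLifetimeFloor {
    // Hypothetical floor; chosen here only for illustration.
    static final long MIN_LIFETIME_MINUTES = 5;

    // Clamp a configured dfs.block.access.token.lifetime (in minutes) to the
    // floor, so values like 0 or 1 cannot produce tokens that expire before
    // a slow or briefly-paused client gets to use them.
    static long effectiveLifetime(long configuredMinutes) {
        return Math.max(configuredMinutes, MIN_LIFETIME_MINUTES);
    }

    public static void main(String[] args) {
        System.out.println(effectiveLifetime(1));   // clamped: prints 5
        System.out.println(effectiveLifetime(600)); // default 10h: prints 600
    }
}
```

Clamping (with a logged warning in real code) is gentler than rejecting the configuration outright, since existing clusters with a low value would still start.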

  was:
*Description*:
 Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such as 
1) makes the lifetime of a block token very short; as a result, some healthy 
datanodes can be wrongly excluded by the client because of an 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream` the client obtains an 
`accessToken` from the namenode and uses it to talk to a datanode. The lifetime 
of the `accessToken` can be made very short (e.g., 1 minute) via 
`dfs.block.access.token.lifetime`. Under some extreme conditions (such as a VM 
migration, a temporary network issue, or a stop-the-world GC), the 
`accessToken` may already have expired by the time the client uses it to talk 
to the datanode. In that case `createBlockOutputStream` returns false (and 
masks the `InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and never reads from or writes to it again.

Related code in `nextBlockOutputStream`:
{code:java}
// Connect to first DataNode in the list.
success = createBlockOutputStream(nodes, nextStorageTypes, nextStorageIDs,
0L, false);

if (!success) {
  LOG.warn("Abandoning " + block);
  dfsClient.namenode.abandonBlock(block.getCurrentBlock(),
  stat.getFileId(), src, dfsClient.clientName);
  block.setCurrentBlock(null);
  final DatanodeInfo badNode = nodes[errorState.getBadNodeIndex()];
  LOG.warn("Excluding datanode " + badNode);
  excludedNodes.put(badNode, badNode);
}
{code}
 

*Proposed solution*:
 A simple retry on the same datanode after catching 
`InvalidBlockTokenException` can solve this problem (assuming the extreme 
conditions are rare). Since `dfs.block.access.token.lifetime` currently accepts 
even values like 0, we could also prevent users from setting it to a small 
value (e.g., by enforcing a minimum of 5 minutes for this parameter).

We have submitted a patch that retries after catching 
`InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide a 
patch that enforces a larger minimum value for 
`dfs.block.access.token.lifetime` if that is a better way to handle this.


> Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and 
> can cause a healthy datanode to be excluded
> 
>
> Key: HDFS-15588
> URL: https://issues.apache.org/jira/browse/HDFS-15588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, 

[jira] [Updated] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sr2020 updated HDFS-15588:
--
Description: 
*Description*:
 Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such as 
1) makes the lifetime of a block token very short; as a result, some healthy 
datanodes can be wrongly excluded by the client because of an 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream` the client obtains an 
`accessToken` from the namenode and uses it to talk to a datanode. The lifetime 
of the `accessToken` can be made very short (e.g., 1 minute) via 
`dfs.block.access.token.lifetime`. Under some extreme conditions (such as a VM 
migration, a temporary network issue, or a stop-the-world GC), the 
`accessToken` may already have expired by the time the client uses it to talk 
to the datanode. In that case `createBlockOutputStream` returns false (and 
masks the `InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and never reads from or writes to it again.

Related code in `nextBlockOutputStream`:
{code:java}
// Connect to first DataNode in the list.
success = createBlockOutputStream(nodes, nextStorageTypes, nextStorageIDs,
0L, false);

if (!success) {
  LOG.warn("Abandoning " + block);
  dfsClient.namenode.abandonBlock(block.getCurrentBlock(),
  stat.getFileId(), src, dfsClient.clientName);
  block.setCurrentBlock(null);
  final DatanodeInfo badNode = nodes[errorState.getBadNodeIndex()];
  LOG.warn("Excluding datanode " + badNode);
  excludedNodes.put(badNode, badNode);
}
{code}
 

*Proposed solution*:
 A simple retry on the same datanode after catching 
`InvalidBlockTokenException` can solve this problem (assuming the extreme 
conditions are rare). Since `dfs.block.access.token.lifetime` currently accepts 
even values like 0, we could also prevent users from setting it to a small 
value (e.g., by enforcing a minimum of 5 minutes for this parameter).

We have submitted a patch that retries after catching 
`InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide a 
patch that enforces a larger minimum value for 
`dfs.block.access.token.lifetime` if that is a better way to handle this.

  was:
*Description*:
Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such as 
1) makes the lifetime of a block token very short; as a result, some healthy 
datanodes can be wrongly excluded by the client because of an 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream` the client obtains an 
`accessToken` from the namenode and uses it to talk to a datanode. The lifetime 
of the `accessToken` can be made very short (e.g., 1 minute) via 
`dfs.block.access.token.lifetime`. Under some extreme conditions (such as a VM 
migration, a temporary network issue, or a stop-the-world GC), the 
`accessToken` may already have expired by the time the client uses it to talk 
to the datanode. In that case `createBlockOutputStream` returns false (and 
masks the `InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and never reads from or writes to it again.


*Proposed solution*:
A simple retry on the same datanode after catching `InvalidBlockTokenException` 
can solve this problem (assuming the extreme conditions are rare). Since 
`dfs.block.access.token.lifetime` currently accepts even values like 0, we 
could also prevent users from setting it to a small value (e.g., by enforcing a 
minimum of 5 minutes for this parameter).

We have submitted a patch that retries after catching 
`InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide a 
patch that enforces a larger minimum value for 
`dfs.block.access.token.lifetime` if that is a better way to handle this.



> Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and 
> can cause a healthy datanode to be excluded
> 
>
> Key: HDFS-15588
> URL: https://issues.apache.org/jira/browse/HDFS-15588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client, security
>Reporter: sr2020
>Priority: Major
> Attachments: HDFS-15588-001.patch
>
>
> *Description*:
>  Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such 
> as 1) makes the lifetime of a block token very short; as a result, some 
> healthy datanodes can be wrongly excluded by the client because of an 
> `InvalidBlockTokenException`.
> More specifically, in `nextBlockOutputStream` the client obtains an 
> `accessToken` from the namenode and uses it to talk to a datanode. 

[jira] [Updated] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sr2020 updated HDFS-15588:
--
Attachment: HDFS-15588-001.patch
Status: Patch Available  (was: Open)

> Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and 
> can cause a healthy datanode to be excluded
> 
>
> Key: HDFS-15588
> URL: https://issues.apache.org/jira/browse/HDFS-15588
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client, security
>Reporter: sr2020
>Priority: Major
> Attachments: HDFS-15588-001.patch
>
>
> *Description*:
> Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such 
> as 1) makes the lifetime of a block token very short; as a result, some 
> healthy datanodes can be wrongly excluded by the client because of an 
> `InvalidBlockTokenException`.
> More specifically, in `nextBlockOutputStream` the client obtains an 
> `accessToken` from the namenode and uses it to talk to a datanode. The 
> lifetime of the `accessToken` can be made very short (e.g., 1 minute) via 
> `dfs.block.access.token.lifetime`. Under some extreme conditions (such as a 
> VM migration, a temporary network issue, or a stop-the-world GC), the 
> `accessToken` may already have expired by the time the client uses it to talk 
> to the datanode. In that case `createBlockOutputStream` returns false (and 
> masks the `InvalidBlockTokenException`), so the client assumes the datanode 
> is unhealthy, marks it as "excluded", and never reads from or writes to it 
> again.
> *Proposed solution*:
> A simple retry on the same datanode after catching 
> `InvalidBlockTokenException` can solve this problem (assuming the extreme 
> conditions are rare). Since `dfs.block.access.token.lifetime` currently 
> accepts even values like 0, we could also prevent users from setting it to a 
> small value (e.g., by enforcing a minimum of 5 minutes for this parameter).
> We have submitted a patch that retries after catching 
> `InvalidBlockTokenException` in `nextBlockOutputStream`. We can also provide 
> a patch that enforces a larger minimum value for 
> `dfs.block.access.token.lifetime` if that is a better way to handle this.






[jira] [Created] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)
sr2020 created HDFS-15588:
-

 Summary: Arbitrarily low values for 
`dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode 
to be excluded
 Key: HDFS-15588
 URL: https://issues.apache.org/jira/browse/HDFS-15588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, hdfs-client, security
Reporter: sr2020


*Description*:
Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (such as 
1) makes the lifetime of a block token very short; as a result, some healthy 
datanodes can be wrongly excluded by the client because of an 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream` the client obtains an 
`accessToken` from the namenode and uses it to talk to a datanode. The lifetime 
of the `accessToken` can be made very short (e.g., 1 minute) via 
`dfs.block.access.token.lifetime`. Under some extreme conditions (such as a VM 
migration, a temporary network issue, or a stop-the-world GC), the 
`accessToken` may already have expired by the time the client uses it to talk 
to the datanode. In that case `createBlockOutputStream` returns false (and 
masks the `InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and never reads from or writes to it again.


*Proposed solution*:
A simple retry on the same datanode after catching `InvalidBlockTokenException` 
can solve this problem (assuming the extreme conditions won't happen often). 
Since `dfs.block.access.token.lifetime` currently accepts values as low as 0, 
we could also prevent users from setting it to a very small value (e.g., 
enforce a minimum of 5 minutes for this parameter).
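One way to enforce such a minimum is a load-time validity check on the configured value. This is a hedged sketch only: the class name, method name, and five-minute floor are illustrative assumptions, not actual Hadoop configuration code.

```java
// Hypothetical sketch: reject token lifetimes below a floor at config-read
// time. Names and the 5-minute floor are invented for illustration.
public class TokenLifetimeConfigDemo {
    static final long MIN_LIFETIME_MINUTES = 5;

    // Returns the configured lifetime if acceptable, otherwise fails fast
    // instead of letting a near-zero lifetime cause spurious node exclusion.
    static long validateLifetime(long configuredMinutes) {
        if (configuredMinutes < MIN_LIFETIME_MINUTES) {
            throw new IllegalArgumentException(
                "dfs.block.access.token.lifetime must be >= "
                + MIN_LIFETIME_MINUTES + " minutes, got " + configuredMinutes);
        }
        return configuredMinutes;
    }

    public static void main(String[] args) {
        System.out.println(validateLifetime(600)); // accepted
        try {
            validateLifetime(0);                   // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A fail-fast check like this surfaces the misconfiguration at startup rather than as mysterious datanode exclusions at runtime.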

We have submitted a patch that retries after catching 
`InvalidBlockTokenException` in `nextBlockOutputStream`; we can also provide a 
patch that enforces a larger minimum value for 
`dfs.block.access.token.lifetime` if that is a better way to handle this.
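As a hedged illustration of the retry idea (not the actual patch): retry the same datanode once after a token-expiry failure before treating it as bad. Every name here — `InvalidTokenException`, `callWithRetry`, `MAX_TOKEN_RETRIES` — is an invented stand-in, not a Hadoop API.

```java
import java.util.function.Supplier;

public class TokenRetryDemo {
    // Stand-in for InvalidBlockTokenException; unchecked here for brevity.
    static class InvalidTokenException extends RuntimeException {
        InvalidTokenException(String msg) { super(msg); }
    }

    static final int MAX_TOKEN_RETRIES = 1;

    // Retry the same datanode once after a token failure instead of
    // excluding the node outright.
    static <T> T callWithRetry(Supplier<T> attempt) {
        InvalidTokenException last = null;
        for (int i = 0; i <= MAX_TOKEN_RETRIES; i++) {
            try {
                return attempt.get();   // e.g. refetch token, reconnect
            } catch (InvalidTokenException e) {
                last = e;               // token expired mid-flight: try again
            }
        }
        throw last;                     // only now treat the datanode as bad
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = callWithRetry(() -> {
            if (calls[0]++ == 0) {
                throw new InvalidTokenException("block token expired");
            }
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The key design point is that a single token-expiry failure is treated as transient rather than as evidence the datanode is unhealthy.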







[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198499#comment-17198499
 ] 

Íñigo Goiri commented on HDFS-15579:


BTW, there are a few checkstyle issues in  [^HDFS-15579-003.patch].

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, 
> HDFS-15579-003.patch
>
>
> There is a constructor of PathLocation as follows; it creates a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was creating a PathLocation with an override 
> destination. It took me a while to realize that this constructor sorts the 
> destinations inside.
> Maybe this constructor could be made clearer about its usage?
>  
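One possible way to make the intent self-documenting — purely a sketch, with simplified types and an invented factory-method name — is a named static factory instead of the overloaded constructor:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a hypothetical rename of the HDFS-15579 constructor
// into a factory whose name says it reorders destinations.
public class PathLocationSketch {
    final List<String> destinations;

    PathLocationSketch(List<String> destinations) {
        this.destinations = destinations;
    }

    // Reads better at call sites than `new PathLocation(other, nsId)`.
    static PathLocationSketch prioritizeDestination(
            PathLocationSketch other, String firstNsId) {
        List<String> ordered = new ArrayList<>(other.destinations);
        // Move the prioritised namespace to the front, mimicking the
        // orderedNamespaces() helper in the original code.
        if (ordered.remove(firstNsId)) {
            ordered.add(0, firstNsId);
        }
        return new PathLocationSketch(ordered);
    }

    public static void main(String[] args) {
        PathLocationSketch base =
            new PathLocationSketch(List.of("ns0", "ns1", "ns2"));
        PathLocationSketch p = prioritizeDestination(base, "ns1");
        System.out.println(p.destinations); // [ns1, ns0, ns2]
    }
}
```

A named factory removes the ambiguity the commenter describes: the call site says the destinations are being reordered, not overridden.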






[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198497#comment-17198497
 ] 

Íñigo Goiri commented on HDFS-15579:


If it is easy, it is usually best to have unit tests.
Otherwise, this is indirectly covered by other tests.

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, 
> HDFS-15579-003.patch
>
>
> There is a constructor of PathLocation as follows; it creates a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was creating a PathLocation with an override 
> destination. It took me a while to realize that this constructor sorts the 
> destinations inside.
> Maybe this constructor could be made clearer about its usage?
>  






[jira] [Updated] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-18 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-15566:

Attachment: HDFS-15566-003.patch

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch, HDFS-15566-002.patch, 
> HDFS-15566-003.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}
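The mismatch described above — a reader that consumes or skips the *modification time* field depending on which layout version it assumes — can be sketched as follows. The byte layout, version constant, and method names are all invented for illustration; this is not the actual FSEditLogOp code.

```java
import java.nio.ByteBuffer;

// Hedged sketch of the failure mode: field layout depends on the version
// the reader assumes, so a mismatch shifts every later field.
public class EditLogVersionDemo {
    // Hypothetical layout version that added mtime (HDFS versions are
    // negative; more negative means newer).
    static final int LAYOUT_WITH_MTIME = -66;

    // Writer (new layout): [txId:int][mtime:long][length:int]
    static ByteBuffer writeNewLayout(int txId, long mtime, int length) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(txId).putLong(mtime).putInt(length);
        buf.flip();
        return buf;
    }

    // Reader: only consumes mtime when it believes the layout contains it.
    static int readLength(ByteBuffer buf, int assumedLayoutVersion) {
        buf.getInt();                                 // txId
        if (assumedLayoutVersion <= LAYOUT_WITH_MTIME) {
            buf.getLong();                            // modification time
        }
        return buf.getInt();  // length -- or mtime bytes, if mtime was skipped
    }

    public static void main(String[] args) {
        ByteBuffer record = writeNewLayout(7, 1_599_000_000_000L, 42);
        // Correct assumption: reads the real length, 42.
        System.out.println(readLength(record.duplicate(), -66));
        // Assuming the *old* layout (-65) misreads mtime bytes as the length.
        System.out.println(readLength(record.duplicate(), -65));
    }
}
```

This is why the NN fails far from the root cause: the reader doesn't crash at the skipped field, it silently misinterprets the bytes that follow.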






[jira] [Commented] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198435#comment-17198435
 ] 

Steve Loughran commented on HDFS-15587:
---

https://nvd.nist.gov/vuln/detail/CVE-2017-3166

> Hadoop Client version 3.2.1 vulnerability
> -
>
> Key: HDFS-15587
> URL: https://issues.apache.org/jira/browse/HDFS-15587
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: Laszlo Czol
>Priority: Minor
>
>  I'm having a problem using hadoop-client version 3.2.1 in my dependency 
> tree. It has a vulnerable jar: org.apache.hadoop : 
> hadoop-mapreduce-client-core : 3.2.1. The code for the vulnerability is 
> CVE-2017-3166: basically, _if a file in an encryption zone with access 
> permissions that make it world readable is localized via YARN's localization 
> mechanism, that file will be stored in a world-readable location and can be 
> shared freely with any application that requests to localize that file_. The 
> problem is that if I update to the 3.3.0 hadoop-client version the 
> vulnerability remains, and I would not downgrade to version 2.8.1, which is 
> the next non-vulnerable version.
> Do you have any roadmap or any plan for this?






[jira] [Resolved] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15587.
---
Resolution: Invalid

> Hadoop Client version 3.2.1 vulnerability
> -
>
> Key: HDFS-15587
> URL: https://issues.apache.org/jira/browse/HDFS-15587
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: Laszlo Czol
>Priority: Minor
>
>  I'm having a problem using hadoop-client version 3.2.1 in my dependency 
> tree. It has a vulnerable jar: org.apache.hadoop : 
> hadoop-mapreduce-client-core : 3.2.1. The code for the vulnerability is 
> CVE-2017-3166: basically, _if a file in an encryption zone with access 
> permissions that make it world readable is localized via YARN's localization 
> mechanism, that file will be stored in a world-readable location and can be 
> shared freely with any application that requests to localize that file_. The 
> problem is that if I update to the 3.3.0 hadoop-client version the 
> vulnerability remains, and I would not downgrade to version 2.8.1, which is 
> the next non-vulnerable version.
> Do you have any roadmap or any plan for this?






[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486253=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486253
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 15:56
Start Date: 18/Sep/20 15:56
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312#issuecomment-694948683


   Thanks @ayushtkn  for the commit.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 486253)
Time Spent: 1h 10m  (was: 1h)

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]






[jira] [Comment Edited] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198406#comment-17198406
 ] 

Janus Chow edited comment on HDFS-15579 at 9/18/20, 3:33 PM:
-

There was no test for the PathLocation class. Is it necessary to create a 
new test file for this method?


was (Author: symious):
There was no tests for the class of PathLocation. Is it necessary to create a 
new test file for this method?

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, 
> HDFS-15579-003.patch
>
>
> There is a constructor of PathLocation as follows; it creates a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was creating a PathLocation with an override 
> destination. It took me a while to realize that this constructor sorts the 
> destinations inside.
> Maybe this constructor could be made clearer about its usage?
>  






[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198406#comment-17198406
 ] 

Janus Chow commented on HDFS-15579:
---

There were no tests for the PathLocation class. Is it necessary to create a 
new test file for this method?

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, 
> HDFS-15579-003.patch
>
>
> There is a constructor of PathLocation as follows; it creates a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was creating a PathLocation with an override 
> destination. It took me a while to realize that this constructor sorts the 
> destinations inside.
> Maybe this constructor could be made clearer about its usage?
>  






[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198386#comment-17198386
 ] 

Hadoop QA commented on HDFS-15579:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
14s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-18 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198379#comment-17198379
 ] 

jianghua zhu commented on HDFS-15516:
-

[~elgoiri], [~csun], [~ayushtkn], thank you very much for your suggestions; 
I will continue working on this and submit some changes later.

 

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit 
> logs don't seem to contain any info regarding those flags. It would be 
> useful to add info regarding the create options to the audit logs, similar 
> to rename ops. 






[jira] [Commented] (HDFS-15583) Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 to branch 3.2 and 3.1

2020-09-18 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198355#comment-17198355
 ] 

Stephen O'Donnell commented on HDFS-15583:
--

This patch also applies cleanly to branch-3.1, so the plan is to push it to 
branch-3.2 and branch-3.1.

[~weichiu] could you give this backport a review please?

> Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 
> to branch 3.2 and 3.1
> --
>
> Key: HDFS-15583
> URL: https://issues.apache.org/jira/browse/HDFS-15583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0, 3.2.1
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15583.branch-3.2.001.patch
>
>
> HDFS-14476, HDFS-14751 and HDFS-15048 made some good improvements to the 
> datanode DirectoryScanner, but due to a large refactor on that class in 
> branch-3.3, they are not trivial to backport to earlier branches.
> HDFS-14476 introduced the problem addressed in HDFS-14751 as well as a 
> findbugs warning fixed in HDFS-15048, so these three need to be backported 
> together.






[jira] [Created] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Laszlo Czol (Jira)
Laszlo Czol created HDFS-15587:
--

 Summary: Hadoop Client version 3.2.1 vulnerability
 Key: HDFS-15587
 URL: https://issues.apache.org/jira/browse/HDFS-15587
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: Laszlo Czol


 I'm having a problem using hadoop-client version 3.2.1 in my dependency tree. 
It has a vulnerable jar: org.apache.hadoop : hadoop-mapreduce-client-core : 
3.2.1. The code for the vulnerability is CVE-2017-3166: basically, _if a file 
in an encryption zone with access permissions that make it world readable is 
localized via YARN's localization mechanism, that file will be stored in a 
world-readable location and can be shared freely with any application that 
requests to localize that file_. The problem is that if I update to the 3.3.0 
hadoop-client version the vulnerability remains, and I would not downgrade to 
version 2.8.1, which is the next non-vulnerable version.
Do you have any roadmap or any plan for this?






[jira] [Updated] (HDFS-15438) Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy

2020-09-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15438:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy
> --
>
> Key: HDFS-15438
> URL: https://issues.apache.org/jira/browse/HDFS-15438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: AMC-team
>Assignee: AMC-team
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15438.000.patch, HDFS-15438.001.patch, Screen Shot 
> 2020-09-03 at 4.33.53 PM.png
>
>
> In the HDFS disk balancer, the config parameter 
> "dfs.disk.balancer.max.disk.errors" controls the maximum number of errors we 
> can ignore for a specific move between two disks before it is abandoned.
> The parameter accepts values >= 0, and setting the value to 0 should mean no 
> error tolerance. However, setting the value to 0 simply skips the block copy 
> even when no disk error occurs, because the while-loop condition 
> *item.getErrorCount() < getMaxError(item)* is never satisfied.
> {code:java}
> // Gets the next block that we can copy
> private ExtendedBlock getBlockToCopy(FsVolumeSpi.BlockIterator iter,
>  DiskBalancerWorkItem item) {
>   while (!iter.atEnd() && item.getErrorCount() < getMaxError(item)) {
> try {
>   ... //get the block
> }  catch (IOException e) {
> item.incErrorCount();
> }
>if (item.getErrorCount() >= getMaxError(item)) {
> item.setErrMsg("Error count exceeded.");
> LOG.info("Maximum error count exceeded. Error count: {} Max error:{} 
> ",
> item.getErrorCount(), item.getMaxDiskErrors());
>   }
> {code}
> *How to fix*
> Change the while loop condition to support value 0.
>   
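The predicate change can be illustrated with a minimal sketch. This is one possible rewrite under the stated assumption that 0 should mean "stop after the first error"; the actual patch may differ.

```java
// Hedged sketch of the HDFS-15438 condition: with maxErrors == 0 the
// original `errorCount < maxErrors` test is false before any error has
// happened, so no block is ever copied.
public class MaxErrorDemo {
    // Original predicate: refuses to start when maxErrors == 0.
    static boolean shouldContinueOld(int errorCount, int maxErrors) {
        return errorCount < maxErrors;
    }

    // Possible fix: maxErrors == 0 tolerates no errors but still allows
    // copying while none have occurred.
    static boolean shouldContinueFixed(int errorCount, int maxErrors) {
        return maxErrors == 0 ? errorCount == 0 : errorCount < maxErrors;
    }

    public static void main(String[] args) {
        System.out.println(shouldContinueOld(0, 0));   // copy never starts
        System.out.println(shouldContinueFixed(0, 0)); // copies until an error
        System.out.println(shouldContinueFixed(1, 0)); // first error aborts
    }
}
```

For maxErrors > 0 the fixed predicate behaves exactly like the original, so only the zero case changes.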






[jira] [Commented] (HDFS-15438) Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy

2020-09-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198305#comment-17198305
 ] 

Ayush Saxena commented on HDFS-15438:
-

Committed to trunk.

Thanx [~AMC-team]  for the contribution.

> Setting dfs.disk.balancer.max.disk.errors = 0 will fail the block copy
> --
>
> Key: HDFS-15438
> URL: https://issues.apache.org/jira/browse/HDFS-15438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: AMC-team
>Assignee: AMC-team
>Priority: Major
> Attachments: HDFS-15438.000.patch, HDFS-15438.001.patch, Screen Shot 
> 2020-09-03 at 4.33.53 PM.png
>
>
> In the HDFS disk balancer, the config parameter 
> "dfs.disk.balancer.max.disk.errors" controls the maximum number of errors we 
> can ignore for a specific move between two disks before it is abandoned.
> The parameter accepts values >= 0, and setting the value to 0 should mean no 
> error tolerance. However, setting the value to 0 simply skips the block copy 
> even when no disk error occurs, because the while-loop condition 
> *item.getErrorCount() < getMaxError(item)* is never satisfied.
> {code:java}
> // Gets the next block that we can copy
> private ExtendedBlock getBlockToCopy(FsVolumeSpi.BlockIterator iter,
>  DiskBalancerWorkItem item) {
>   while (!iter.atEnd() && item.getErrorCount() < getMaxError(item)) {
> try {
>   ... //get the block
> }  catch (IOException e) {
> item.incErrorCount();
> }
>if (item.getErrorCount() >= getMaxError(item)) {
> item.setErrMsg("Error count exceeded.");
> LOG.info("Maximum error count exceeded. Error count: {} Max error:{} 
> ",
> item.getErrorCount(), item.getMaxDiskErrors());
>   }
> {code}
> *How to fix*
> Change the while loop condition to support value 0.
>   






[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198294#comment-17198294
 ] 

Hadoop QA commented on HDFS-15415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
48s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
13s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestLocalDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/183/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13011696/HDFS-15415.branch-3.3.001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux db908a700a82 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-15583) Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 to branch 3.2 and 3.1

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198277#comment-17198277
 ] 

Hadoop QA commented on HDFS-15583:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
13s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
42s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/182/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13011699/HDFS-15583.branch-3.2.001.patch
 |
| Optional Tests | dupname asflicense xml compile javac javadoc mvninstall 
mvnsite 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198274#comment-17198274
 ] 

Ayush Saxena commented on HDFS-15516:
-

Thanx everyone. From the compatibility point of view, this is indeed 
incompatible; we can't change things in between releases. As [~csun] mentioned, 
this is going to break the parsers.

From the compatibility guidelines:
{noformat}
Several components have audit logging systems that record system 
information in a machine readable format. Incompatible changes to that 
data format may break existing automation utilities. For the audit log, 
an incompatible change is any change that changes the format such that 
existing parsers no longer can parse the logs.

Policy
All audit log output SHALL be considered Public and Stable. Any change to the 
data format SHALL be considered an incompatible change.{noformat}

Though it would be good to have, I think we can't/shouldn't do this; 
{{Create}}/{{Append}}/{{Rename}} are quite high-frequency calls as well.
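To see why the policy is strict, consider a minimal positional parser of the kind downstream automation tends to use. The field names and tab-separated layout below are a simplified assumption, not the exact NameNode audit format; the point is only that appending a new field breaks an exact-match parser.

```java
// Illustrative strict audit-log parser; the field set is hypothetical.
class AuditParserSketch {
    // Fixed field order the downstream tool was written against.
    static final String[] EXPECTED = {"allowed", "ugi", "ip", "cmd", "src", "dst", "perm"};

    // Accepts a line only if it has exactly the expected key=value fields in order.
    static boolean parses(String line) {
        String[] fields = line.split("\t");
        if (fields.length != EXPECTED.length) {
            return false;
        }
        for (int i = 0; i < fields.length; i++) {
            if (!fields[i].startsWith(EXPECTED[i] + "=")) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String current = "allowed=true\tugi=alice\tip=/127.0.0.1\tcmd=create"
                + "\tsrc=/f\tdst=null\tperm=alice:rw-";
        String extended = current + "\toptions=OVERWRITE";  // hypothetical extra field
        System.out.println(parses(current));   // true
        System.out.println(parses(extended));  // false: the new field breaks the parser
    }
}
```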

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit 
> logs don't seem to contain the info regarding those flags. It would be 
> useful to add info about the create options to the audit logs, similar to 
> the Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janus Chow updated HDFS-15579:
--
Attachment: HDFS-15579-003.patch

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch, 
> HDFS-15579-003.patch
>
>
> There is a constructor of PathLocation as follows; it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was meant to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be clearer about its usage?
>  
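One way to address the confusion, sketched on a toy stand-in for PathLocation (only the field names mirror the snippet above; the rest is an assumption, not the committed patch): replace the reordering constructor with a named static factory such as {{prioritizeDestination}}, so the call site says "reorder" instead of reading like an override.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for PathLocation, reduced to the fields the snippet shows.
class PathLocationSketch {
    final String sourcePath;
    final List<String> destinations;

    PathLocationSketch(String sourcePath, List<String> destinations) {
        this.sourcePath = sourcePath;
        this.destinations = destinations;
    }

    // Named factory: "prioritizeDestination" states the intent (reorder the
    // existing destinations), where "new PathLocation(other, nsId)" could be
    // misread as overriding the destination.
    static PathLocationSketch prioritizeDestination(PathLocationSketch base,
            String firstNsId) {
        List<String> ordered = new ArrayList<>(base.destinations);
        if (ordered.remove(firstNsId)) {
            ordered.add(0, firstNsId);  // move the prioritized namespace first
        }
        return new PathLocationSketch(base.sourcePath, ordered);
    }
}
```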



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15582) Reduce NameNode audit log

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198267#comment-17198267
 ] 

Hadoop QA commented on HDFS-15582:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
14s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 622 unchanged - 0 fixed = 624 total (was 622) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. 

[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198266#comment-17198266
 ] 

Janus Chow commented on HDFS-15579:
---

Thanks a lot, copied from v1.

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch
>
>
> There is a constructor of PathLocation as follows; it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was meant to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be clearer about its usage?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198265#comment-17198265
 ] 

Ayush Saxena commented on HDFS-15579:
-

Misses one param: there are two arguments, and you missed the {{PathLocation}} one:
{code:java}
+   * @param firstNsId Identifier of the namespace to place first.
+   */
+  public static PathLocation prioritizeDestination(PathLocation base, String 
firstNsId) {{code}

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch
>
>
> There is a constructor of PathLocation as follows; it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was meant to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be clearer about its usage?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-15585.
-
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3

Thanx [~umamaheswararao] for the contribution!!!

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
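The failure mode is structural: the token collector walks every underlying filesystem, so a single implementation that throws aborts the whole collection, while one that simply returns no token is skipped. A toy model of that walk (names are illustrative, not the real DelegationTokenIssuer API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of delegation-token collection; not the real Hadoop API.
class TokenCollectionSketch {
    interface Issuer {
        String getDelegationToken();
    }

    // Keeps the tokens that exist. A null result is harmless ("this
    // filesystem issues no token"); a thrown UnsupportedOperationException
    // propagates and aborts the whole walk, which matches the way the
    // HiveServer2 startup above failed.
    static List<String> collect(List<Issuer> issuers) {
        List<String> tokens = new ArrayList<>();
        for (Issuer issuer : issuers) {
            String token = issuer.getDelegationToken();
            if (token != null) {
                tokens.add(token);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<Issuer> ok = List.of(() -> "hdfs-token", () -> null);
        System.out.println(collect(ok));  // [hdfs-token]

        List<Issuer> bad = List.of(() -> "hdfs-token", () -> {
            throw new UnsupportedOperationException();
        });
        try {
            collect(bad);
        } catch (UnsupportedOperationException e) {
            System.out.println("collection aborted");
        }
    }
}
```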



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486148=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486148
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 09:48
Start Date: 18/Sep/20 09:48
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 486148)
Time Spent: 1h  (was: 50m)

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486145=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486145
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 09:34
Start Date: 18/Sep/20 09:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312#issuecomment-694764501


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 18s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 19s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 39s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  0s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   5m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m 32s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 58s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 113m  4s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 260m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2312/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2312 |
   | JIRA Issue | HDFS-15585 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0ae17e1e753e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eacbe07b565 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Commented] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198241#comment-17198241
 ] 

Hadoop QA commented on HDFS-15585:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
29s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
0s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| 

[jira] [Commented] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198235#comment-17198235
 ] 

Hadoop QA commented on HDFS-15585:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
1s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
23s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| 

[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486124=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486124
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 09:10
Start Date: 18/Sep/20 09:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312#issuecomment-694752841


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 54s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  1s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 45s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 31s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 25s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   3m 15s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 48s |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  unit  |   0m 26s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 25s |  ASF License check generated no 
output?  |
   |  |   | 110m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2312/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2312 |
   | JIRA Issue | HDFS-15585 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5c04e8f0e001 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eacbe07b565 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2312/2/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2312/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | unit | 

[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=486123=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486123
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 09:10
Start Date: 18/Sep/20 09:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694752616


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  40m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 33s |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 54s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   5m 21s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  1s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |   2m  1s |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 602 
unchanged - 0 fixed = 604 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m 43s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |   1m 43s |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new 
+ 586 unchanged - 0 fixed = 588 total (was 586)  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  23m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 175m 20s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 337m  3s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestQuota |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[jira] [Commented] (HDFS-15584) Improve HDFS large deletion cause namenode lockqueue boom and pending deletion boom.

2020-09-18 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198220#comment-17198220
 ] 

zhuqi commented on HDFS-15584:
--

Hi [~sodonnell] 

Yeah, I agree with you that we should sleep.

As for a good default for "dfs.namenode.block.deletion.lock.time.threshold", we 
should test what it should be. If the sleep time is 1ms, the threshold should 
probably be tens of ms, but I set it to 100ms initially.

Also, I added the threshold to avoid sleeping when deletion is not heavy, to 
reduce the time the lock is held. You are right.

When too many blocks are pending deletion, the datanode is heavily loaded 
handling them, and the rate at which the namenode enqueues deletions also slows 
down, which affects performance.

Thanks for your quick reply.

> Improve HDFS large deletion cause namenode lockqueue boom and pending 
> deletion boom.
> 
>
> Key: HDFS-15584
> URL: https://issues.apache.org/jira/browse/HDFS-15584
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15584.001.patch
>
>
> In our production cluster, the large deletion will boom the namenode lock 
> queue, also will lead to the boom of pending deletion in invalidate blocks.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15586) HBase NodeLabel support

2020-09-18 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198210#comment-17198210
 ] 

jianghua zhu commented on HDFS-15586:
-

Sorry, this issue should be placed under the HBase project.

> HBase NodeLabel support
> ---
>
> Key: HDFS-15586
> URL: https://issues.apache.org/jira/browse/HDFS-15586
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: jianghua zhu
>Priority: Major
>
> We can add a new feature similar to YARN's NodeLabel.
> Here, the main purpose is to classify nodes with different resources (CPU, 
> memory, etc.) in the cluster so that access to HBase can be more efficient.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15584) Improve HDFS large deletion cause namenode lockqueue boom and pending deletion boom.

2020-09-18 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198204#comment-17198204
 ] 

Stephen O'Donnell commented on HDFS-15584:
--

Even though the loop in delete drops the lock and takes it back, we do still 
see large deletes slowing down the namenode badly. In other parts of the code, 
I see a similar pattern where the thread sleeps a short time before acquiring 
the lock again. A short sleep here is probably the best answer.

Before this patch, the thread processes 1000 blocks and then releases the lock. 
Do we have an idea of how long that takes? I would think much less than 100ms.

The timer you have added includes the lock wait time as well as the time to 
remove 1000 blocks. Is your idea that, if the lock is busy, the delete thread 
will take a long time to get the write lock, and in that case it should sleep 
to reduce the time it holds the lock, but if the lock is not busy, it will not 
sleep at all?

The tricky part of this is picking a good default for 
"dfs.namenode.block.deletion.lock.time.threshold". 

Once the delete completes and the blocks are on pendingDeletion - do you still 
see problems on the namenode, or does it work OK while Pending Deletion is 
processed?
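
The batching-plus-sleep pattern discussed above can be sketched as follows. This is a hypothetical illustration, not the actual HDFS-15584 patch: the names `BATCH_SIZE` and `LOCK_TIME_THRESHOLD_MS` are stand-ins for the real batch size and the proposed "dfs.namenode.block.deletion.lock.time.threshold" config, and a simple counter stands in for removing blocks from the blocks map.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of batched deletion under a shared write lock: remove blocks in
 * fixed-size batches, drop the lock between batches so other operations can
 * proceed, and back off with a short sleep only when a batch (including the
 * lock wait) exceeded a threshold, i.e. when the lock appears contended.
 */
public class BatchedDeleteSketch {
  private static final int BATCH_SIZE = 1000;                 // hypothetical batch size
  private static final long LOCK_TIME_THRESHOLD_MS = 100;     // stand-in for the proposed config
  private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public static void main(String[] args) throws InterruptedException {
    int remaining = 5000;   // stand-in for the blocks of a large delete
    int batches = 0;
    while (remaining > 0) {
      long start = System.currentTimeMillis();
      lock.writeLock().lock();            // may wait if the lock queue is busy
      try {
        int toRemove = Math.min(BATCH_SIZE, remaining);
        remaining -= toRemove;            // stand-in for removing a batch of blocks
      } finally {
        lock.writeLock().unlock();        // release between batches, as the existing loop does
      }
      batches++;
      long elapsed = System.currentTimeMillis() - start;      // lock wait + removal time
      if (elapsed > LOCK_TIME_THRESHOLD_MS) {
        Thread.sleep(1);                  // sleep only when the lock was contended
      }
    }
    System.out.println("processed 5000 blocks in " + batches + " batches");
  }
}
```

Because the elapsed-time check covers the lock wait as well, an uncontended lock never triggers the sleep, which matches the threshold's stated purpose of avoiding back-off during light deletion.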



> Improve HDFS large deletion cause namenode lockqueue boom and pending 
> deletion boom.
> 
>
> Key: HDFS-15584
> URL: https://issues.apache.org/jira/browse/HDFS-15584
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15584.001.patch
>
>
> In our production cluster, the large deletion will boom the namenode lock 
> queue, also will lead to the boom of pending deletion in invalidate blocks.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15584) Improve HDFS large deletion cause namenode lockqueue boom and pending deletion boom.

2020-09-18 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198198#comment-17198198
 ] 

zhuqi commented on HDFS-15584:
--

Hi [~LiJinglun], in our very busy cluster with thousands of nodes, heavy 
deletion every day causes the lock queue to stay full for a couple of minutes. 
Also, when millions of blocks are put into the pending deletion queue, the 
NameNode suffers a big performance drop. When this happens, the original 
block-increment solution also cannot solve the problem in our cluster, so I 
added this patch to try to solve it.

Thanks.

> Improve HDFS large deletion cause namenode lockqueue boom and pending 
> deletion boom.
> 
>
> Key: HDFS-15584
> URL: https://issues.apache.org/jira/browse/HDFS-15584
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15584.001.patch
>
>
> In our production cluster, the large deletion will boom the namenode lock 
> queue, also will lead to the boom of pending deletion in invalidate blocks.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198152#comment-17198152
 ] 

Janus Chow edited comment on HDFS-15579 at 9/18/20, 8:01 AM:
-

-Checked the QA result was not be related to this patch.-

-Removed the patch and TestRpcRouter passed, will check the root cause.-

Interesting: I ran the same TestRouterRpc test several times; some runs 
succeeded and some failed. The result is the same with or without this patch.

So the test failure should not be related to this patch.


was (Author: symious):
-Checked the QA result was not be related to this patch.-

Removed the patch and TestRpcRouter passed, will check the root cause.

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch
>
>
> There is a constructor of PathLocation as follows, it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was to create a PathLocation with an override 
> destination. It took me a while before I realize this is a constructor to 
> sort the destinations inside.
> Maybe I think this constructor can be more clear about its usage?
>  
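
One way to make the quoted constructor's intent explicit is a named static factory method. The sketch below is hypothetical, not the actual patch: `prioritizeDestination` and the simplified `PathLocation` internals are illustrative stand-ins for the real RBF classes.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified stand-in for the RBF PathLocation, to illustrate the naming idea. */
public class PathLocation {
  private final List<String> destinations;

  private PathLocation(List<String> destinations) {
    this.destinations = destinations;
  }

  /**
   * The method name states the purpose the constructor hides: same
   * destinations, with firstNsId moved to the front of the order.
   */
  public static PathLocation prioritizeDestination(PathLocation other, String firstNsId) {
    List<String> ordered = new ArrayList<>(other.destinations);
    if (ordered.remove(firstNsId)) {   // only reorder if the namespace is present
      ordered.add(0, firstNsId);
    }
    return new PathLocation(ordered);
  }

  public static void main(String[] args) {
    PathLocation loc = new PathLocation(List.of("ns0", "ns1", "ns2"));
    PathLocation prioritized = prioritizeDestination(loc, "ns1");
    System.out.println(prioritized.destinations); // [ns1, ns0, ns2]
  }
}
```

A reader of `PathLocation.prioritizeDestination(other, nsId)` cannot mistake it for an override of the destination, which was the misunderstanding described above.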



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15581) Access Controlled HTTPFS Proxy

2020-09-18 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198188#comment-17198188
 ] 

Stephen O'Donnell commented on HDFS-15581:
--

I think I must have clicked "assign" on this by mistake - I don't remember 
doing it and I have no background into this issue. From the history it does 
look like I assigned it to myself! Assigning it back to Richard - sorry about 
that.

> Access Controlled HTTPFS Proxy
> --
>
> Key: HDFS-15581
> URL: https://issues.apache.org/jira/browse/HDFS-15581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Richard
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HADOOP-17244.001.patch
>
>
> There are certain data migration patterns that require a way to limit access 
> to the HDFS via the HTTPFS proxy.  The needed access modes are read-write, 
> read-only and write-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15581) Access Controlled HTTPFS Proxy

2020-09-18 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reassigned HDFS-15581:


Assignee: Richard  (was: Stephen O'Donnell)

> Access Controlled HTTPFS Proxy
> --
>
> Key: HDFS-15581
> URL: https://issues.apache.org/jira/browse/HDFS-15581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Richard
>Assignee: Richard
>Priority: Minor
> Attachments: HADOOP-17244.001.patch
>
>
> There are certain data migration patterns that require a way to limit access 
> to the HDFS via the HTTPFS proxy.  The needed access modes are read-write, 
> read-only and write-only.






[jira] [Created] (HDFS-15586) HBase NodeLabel support

2020-09-18 Thread jianghua zhu (Jira)
jianghua zhu created HDFS-15586:
---

 Summary: HBase NodeLabel support
 Key: HDFS-15586
 URL: https://issues.apache.org/jira/browse/HDFS-15586
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: jianghua zhu


We can add a new feature similar to YARN's NodeLabel.
The main purpose is to classify nodes with different resources (CPU, 
memory, etc.) in the cluster so that access to HBase can be more 
efficient.

 






[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486088=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486088
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 07:19
Start Date: 18/Sep/20 07:19
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2312:
URL: https://github.com/apache/hadoop/pull/2312#discussion_r490750642



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
##
@@ -67,4 +70,23 @@ public void testOpenWithPathHandle() throws Exception {
   }
 }
   }
+
+  @Override
+  public void testEmptyDelegationToken() throws IOException {
+    Configuration conf = getTestConfiguration();
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();

Review comment:
   Done. Thanks





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 486088)
Time Spent: 0.5h  (was: 20m)

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
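The failure mode in the stack trace above is generic: the token-collection path walks every file system and asks each for a delegation token, so a single issuer that throws instead of returning null aborts the whole collection. A toy sketch (hypothetical names, not the actual Hadoop classes) of why "return null" is the safer contract for ViewDFS here:

```java
public class DelegationTokenSketch {

  interface TokenIssuer {
    String getDelegationToken(String renewer);
  }

  // Before the fix: an issuer that throws breaks every caller that merely
  // enumerates issuers, the way the addDelegationTokens path does.
  static class Throwing implements TokenIssuer {
    public String getDelegationToken(String renewer) {
      throw new UnsupportedOperationException();
    }
  }

  // After the fix: returning null means "no token from this issuer", and
  // the caller simply moves on to the child file systems.
  static class NullReturning implements TokenIssuer {
    public String getDelegationToken(String renewer) {
      return null;
    }
  }

  // Simplified stand-in for token collection: count non-null tokens.
  static int collect(TokenIssuer[] issuers, String renewer) {
    int collected = 0;
    for (TokenIssuer issuer : issuers) {
      if (issuer.getDelegationToken(renewer) != null) {
        collected++;
      }
    }
    return collected;
  }

  public static void main(String[] args) {
    TokenIssuer[] issuers = { new NullReturning() };
    System.out.println(collect(issuers, "hive")); // 0, and no exception
  }
}
```

With the throwing variant in the array, `collect` would propagate the exception exactly as Hive's startup does above.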






[jira] [Work logged] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?focusedWorklogId=486087=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486087
 ]

ASF GitHub Bot logged work on HDFS-15585:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 07:08
Start Date: 18/Sep/20 07:08
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312#discussion_r490745131



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
##
@@ -67,4 +70,23 @@ public void testOpenWithPathHandle() throws Exception {
   }
 }
   }
+
+  @Override
+  public void testEmptyDelegationToken() throws IOException {
+    Configuration conf = getTestConfiguration();
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();

Review comment:
   nit:
   you aren't writing any data, so numDataNodes can be set to 0.







Issue Time Tracking
---

Worklog Id: (was: 486087)
Time Spent: 20m  (was: 10m)

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]






[jira] [Commented] (HDFS-15582) Reduce NameNode audit log

2020-09-18 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198171#comment-17198171
 ] 

Jinglun commented on HDFS-15582:


Re-uploaded the patch to trigger Jenkins.

> Reduce NameNode audit log
> -
>
> Key: HDFS-15582
> URL: https://issues.apache.org/jira/browse/HDFS-15582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15582.001.patch
>
>
> Reduce the empty fields in audit log. Add a switch to skip all the empty 
> fields.
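A hypothetical sketch of what such a switch could look like (the field names, the tab-separated layout, and the `skipEmpty` flag are illustrative only, not the actual NameNode audit logger):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class AuditLogSketch {

  // With skipEmpty on, key=value pairs whose value is null or empty are
  // dropped from the audit line instead of being printed as key=null.
  static String format(Map<String, String> fields, boolean skipEmpty) {
    return fields.entrySet().stream()
        .filter(e -> !skipEmpty || (e.getValue() != null && !e.getValue().isEmpty()))
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining("\t"));
  }

  public static void main(String[] args) {
    Map<String, String> fields = new LinkedHashMap<>();
    fields.put("cmd", "delete");
    fields.put("src", "/tmp/x");
    fields.put("dst", null); // empty field: delete has no rename destination
    // Switch off: the empty field is kept as dst=null.
    System.out.println(format(fields, false));
    // Switch on: the empty field is dropped, shortening the line.
    System.out.println(format(fields, true));
  }
}
```

The saving comes from operations like delete or getfileinfo, where several of the fixed audit fields are always empty.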






[jira] [Updated] (HDFS-15582) Reduce NameNode audit log

2020-09-18 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15582:
---
Attachment: HDFS-15582.001.patch

> Reduce NameNode audit log
> -
>
> Key: HDFS-15582
> URL: https://issues.apache.org/jira/browse/HDFS-15582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15582.001.patch
>
>
> Reduce the empty fields in audit log. Add a switch to skip all the empty 
> fields.






[jira] [Updated] (HDFS-15582) Reduce NameNode audit log

2020-09-18 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15582:
---
Attachment: (was: HDFS-15582.001.patch)

> Reduce NameNode audit log
> -
>
> Key: HDFS-15582
> URL: https://issues.apache.org/jira/browse/HDFS-15582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15582.001.patch
>
>
> Reduce the empty fields in audit log. Add a switch to skip all the empty 
> fields.






[jira] [Comment Edited] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198152#comment-17198152
 ] 

Janus Chow edited comment on HDFS-15579 at 9/18/20, 7:03 AM:
-

-Checked that the QA result was not related to this patch.-

Removed the patch and TestRpcRouter passed; will check the root cause.


was (Author: symious):
Checked that the QA result was not related to this patch.

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch
>
>
> There is a constructor of PathLocation as follows, it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be clearer about its usage?
>  
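One way to make the intent explicit is a named static factory instead of the overloaded copy constructor, so the call site says what the second argument does. A minimal sketch with hypothetical names (plain namespace-id strings stand in for the RBF `RemoteLocation` objects; this is not the actual router code):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class PathLocationSketch {

  // Stand-in for PathLocation#orderedNamespaces: move firstNsId to the
  // front while keeping the relative order of the remaining namespaces.
  static List<String> orderedNamespaces(List<String> destinations, String firstNsId) {
    LinkedHashSet<String> ordered = new LinkedHashSet<>();
    if (destinations.contains(firstNsId)) {
      ordered.add(firstNsId);
    }
    ordered.addAll(destinations);
    return new ArrayList<>(ordered);
  }

  // A named factory reads as "re-prioritize an existing location", which
  // the overloaded constructor PathLocation(other, firstNsId) does not.
  static List<String> prioritizeDestination(List<String> destinations, String firstNsId) {
    return orderedNamespaces(destinations, firstNsId);
  }

  public static void main(String[] args) {
    // ns1 is moved to the front; ns0 and ns2 keep their relative order.
    System.out.println(prioritizeDestination(List.of("ns0", "ns1", "ns2"), "ns1"));
    // [ns1, ns0, ns2]
  }
}
```

The same effect could also be had by keeping the constructor but renaming the parameter and documenting that it only re-orders, never replaces, the destinations.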






[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=486082=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486082
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 18/Sep/20 06:50
Start Date: 18/Sep/20 06:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694690394


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |   1m 10s |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 602 
unchanged - 0 fixed = 604 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |   1m  4s |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new 
+ 586 unchanged - 0 fixed = 588 total (was 586)  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 110m 40s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 199m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2288 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e52145dca4b9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eacbe07b565 |
   | Default Java | Private 

[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-18 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198152#comment-17198152
 ] 

Janus Chow commented on HDFS-15579:
---

Checked that the QA result was not related to this patch.

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Minor
> Attachments: HDFS-15579-001.patch, HDFS-15579-002.patch
>
>
> There is a constructor of PathLocation as follows, it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be clearer about its usage?
>  






[jira] [Commented] (HDFS-15584) Improve HDFS large deletion cause namenode lockqueue boom and pending deletion boom.

2020-09-18 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17198147#comment-17198147
 ] 

Jinglun commented on HDFS-15584:


Hi [~zhuqi], thanks for your report. Could you give a more detailed description 
of the boom? We can see whether there is an alternative solution. I'm a little 
concerned about sleeping in the Handler thread. The handlers are usually very 
busy and shouldn't be blocked.

> Improve HDFS large deletion cause namenode lockqueue boom and pending 
> deletion boom.
> 
>
> Key: HDFS-15584
> URL: https://issues.apache.org/jira/browse/HDFS-15584
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15584.001.patch
>
>
> In our production cluster, a large deletion floods the namenode lock 
> queue and also causes the pending deletions in the invalidate blocks to pile 
> up.
>  
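The alternative usually discussed for large deletions is batching: remove the blocks in fixed-size chunks and release the write lock between chunks, so other handlers run in the gaps rather than queueing behind one huge operation. A generic sketch of that pattern (not the attached patch; all names hypothetical, with a plain ReentrantLock standing in for the namesystem lock):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchedDeleteSketch {

  static final int BATCH_SIZE = 1000;
  // Stand-in for the namesystem write lock.
  static final ReentrantLock writeLock = new ReentrantLock();

  // Delete in fixed-size batches, releasing the lock between batches so the
  // lock hold time per acquisition stays bounded by BATCH_SIZE.
  static int deleteBlocks(List<Integer> blocks) {
    int deleted = 0;
    for (int i = 0; i < blocks.size(); i += BATCH_SIZE) {
      writeLock.lock();
      try {
        int end = Math.min(i + BATCH_SIZE, blocks.size());
        deleted += end - i; // placeholder for the actual block removal
      } finally {
        writeLock.unlock();
      }
    }
    return deleted;
  }

  public static void main(String[] args) {
    List<Integer> blocks =
        IntStream.range(0, 5500).boxed().collect(Collectors.toList());
    // 5500 blocks are removed across 6 lock acquisitions instead of 1.
    System.out.println(deleteBlocks(blocks)); // 5500
  }
}
```

Whether the gap between batches should be a plain unlock or an explicit sleep is exactly the trade-off questioned above: an unlock alone already lets waiting handlers in, without parking a handler thread.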


