[jira] [Updated] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14631:
---
Attachment: (was: HDFS-14631.004.patch)

> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch
>
>
> When the DirectoryScanner scans block files, if the block file that an
> in-memory replica refers to does not exist, the DirectoryScanner updates the
> replica based on the replica file found on the disk. See
> FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because
> LocalReplica#parseBaseDir() ignores the 'subdir' components.
>  
>  
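As a self-contained illustration of the last point, here is a minimal Java sketch (not the Hadoop sources; the directory layout and the id-to-subdir mapping are simplified assumptions) of why dropping the 'subdir' components while deriving the base directory leaves the replica still pointing at a block file that does not exist:

{code:java}
import java.io.File;

// Illustration only: a wrongly placed replica sits in a subdir that does not
// match its block id. If only the base directory (with the subdirs stripped)
// is recorded during the update, the block file is later re-derived from the
// id and still resolves to a path that does not exist.
public class WronglyPlacedReplicaSketch {

  // Walk up past any 'subdirN' components, mimicking the effect described
  // above for LocalReplica#parseBaseDir() (simplified assumption).
  static File parseBaseDir(File blockFile) {
    File dir = blockFile.getParentFile();
    while (dir != null && dir.getName().startsWith("subdir")) {
      dir = dir.getParentFile();
    }
    return dir;
  }

  // Simplified stand-in for deriving the expected subdir from the block id.
  static File expectedBlockFile(File baseDir, long blockId) {
    long d1 = (blockId >> 16) & 0x1F;
    long d2 = (blockId >> 8) & 0x1F;
    return new File(baseDir, "subdir" + d1 + "/subdir" + d2 + "/blk_" + blockId);
  }

  public static void main(String[] args) {
    long blockId = 1001;
    // Hypothetical block file found by the DirectoryScanner in the wrong subdir.
    File diskFile = new File(
        "/data/dfs/dn/current/BP-1/current/finalized/subdir7/subdir7/blk_" + blockId);

    File baseDir = parseBaseDir(diskFile);
    System.out.println("file found on disk      : " + diskFile);
    System.out.println("file the replica expects: " + expectedBlockFile(baseDir, blockId));
    // The two paths differ, so the "fixed" replica still has no block file.
  }
}
{code}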



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14631:
---
Attachment: HDFS-14631.004.patch

> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch, HDFS-14631.004.patch
>
>
> When the DirectoryScanner scans block files, if the block file that an
> in-memory replica refers to does not exist, the DirectoryScanner updates the
> replica based on the replica file found on the disk. See
> FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because
> LocalReplica#parseBaseDir() ignores the 'subdir' components.
>  
>  






[jira] [Work logged] (HDDS-1849) Implement S3 Complete MPU request to use Cache and DoubleBuffer

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1849?focusedWorklogId=285517&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285517
 ]

ASF GitHub Bot logged work on HDDS-1849:


Author: ASF GitHub Bot
Created on: 31/Jul/19 05:48
Start Date: 31/Jul/19 05:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1181: HDDS-1849. 
Implement S3 Complete MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1181#issuecomment-516705831
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285517)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement S3 Complete MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1849
> URL: https://issues.apache.org/jira/browse/HDDS-1849
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement the S3 Complete MPU request to use the OM cache and double buffer.
>  
> This Jira adds the changes to implement the S3 bucket operations. HA and
> non-HA will have different code paths for now, but once all requests are
> implemented there will be a single code path.






[jira] [Updated] (HDFS-8708) DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies

2019-07-30 Thread Chengbing Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengbing Liu updated HDFS-8708:

Target Version/s:   (was: 2.8.0)

> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies
> --
>
> Key: HDFS-8708
> URL: https://issues.apache.org/jira/browse/HDFS-8708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Jitendra Nath Pandey
>Assignee: Chengbing Liu
>Priority: Critical
> Attachments: HDFS-8708.001.patch
>
>
> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies to
> ensure fast failover. Otherwise, the DFSClient retries the NameNode that is
> no longer active, which delays the failover.
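For context, a minimal client-side sketch of the scenario (the nameservice name and hosts are made up; only the property names come from the issue and the standard HA client configuration):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical HA client setup used only to illustrate the report: with a
// failover proxy provider configured, dfs.client.retry.policy.enabled should
// not make the client keep retrying the NameNode that is no longer active.
public class HaRetryPolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Illustrative HA nameservice definition (all names are placeholders).
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    // The setting in question; per this issue it should be ignored for HA
    // proxies so that retries do not delay failover.
    conf.setBoolean("dfs.client.retry.policy.enabled", true);

    try (FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf)) {
      System.out.println("Working directory: " + fs.getWorkingDirectory());
    }
  }
}
{code}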






[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285516
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 05:39
Start Date: 31/Jul/19 05:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 285516)
Time Spent: 4h 40m  (was: 4.5h)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The following things will be implemented in this Jira:
>  # Make the necessary changes for the non-HA code path to use the Cache and
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. The non-HA path waits
> on this future and, once it completes, returns the response to the client.
>  ## The add to the double buffer happens inside validateAndUpdateCache. This
> way, in non-HA, when multiple RPC handler threads call preExecute and
> validateAndUpdateCache, entries are inserted into the double buffer in the
> order the requests are received.
>  
> In this Jira we shall not convert the non-HA code path to use this, as the
> security and ACL work is not yet complete for this new model.
>  
>  
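A generic sketch of the pattern described above (plain Java, not the actual OzoneManager classes): validateAndUpdateCache appends the response to the double buffer and gets back a future; the non-HA handler waits on that future before replying, so responses are flushed in arrival order.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustration only. Entries are appended in the order requests are handled;
// the flush thread swaps buffers, writes the batch, and completes the futures.
public class DoubleBufferSketch {

  static final class Entry {
    final String response;
    final CompletableFuture<Void> flushed = new CompletableFuture<>();
    Entry(String response) { this.response = response; }
  }

  private List<Entry> currentBuffer = new ArrayList<>();

  // Called from the request path (e.g. inside validateAndUpdateCache).
  synchronized CompletableFuture<Void> add(String response) {
    Entry e = new Entry(response);
    currentBuffer.add(e);
    return e.flushed;
  }

  // Called from the flush thread.
  void flush() {
    List<Entry> toFlush;
    synchronized (this) {
      toFlush = currentBuffer;
      currentBuffer = new ArrayList<>();
    }
    for (Entry e : toFlush) {
      // ... apply e.response's DB updates as one batch here ...
      e.flushed.complete(null);  // non-HA path may now return to the client
    }
  }

  public static void main(String[] args) {
    DoubleBufferSketch buffer = new DoubleBufferSketch();
    CompletableFuture<Void> done = buffer.add("CreateBucketResponse");
    buffer.flush();  // normally driven by a background thread
    done.join();     // the non-HA handler waits here before responding
    System.out.println("response returned after flush");
  }
}
{code}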






[jira] [Updated] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1856:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The following things will be implemented in this Jira:
>  # Make the necessary changes for the non-HA code path to use the Cache and
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. The non-HA path waits
> on this future and, once it completes, returns the response to the client.
>  ## The add to the double buffer happens inside validateAndUpdateCache. This
> way, in non-HA, when multiple RPC handler threads call preExecute and
> validateAndUpdateCache, entries are inserted into the double buffer in the
> order the requests are received.
>  
> In this Jira we shall not convert the non-HA code path to use this, as the
> security and ACL work is not yet complete for this new model.
>  
>  






[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285515
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 05:38
Start Date: 31/Jul/19 05:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1174: HDDS-1856. 
Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516703796
 
 
   Opened a Jira, HDDS-1872, for the failure related to
TestS3MultipartUploadAbortResponse.
   The rest of the test failures are not related to this patch.
   Thank you @arp7 for the review. I will commit this to trunk.
 



Issue Time Tracking
---

Worklog Id: (was: 285515)
Time Spent: 4.5h  (was: 4h 20m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The following things will be implemented in this Jira:
>  # Make the necessary changes for the non-HA code path to use the Cache and
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. The non-HA path waits
> on this future and, once it completes, returns the response to the client.
>  ## The add to the double buffer happens inside validateAndUpdateCache. This
> way, in non-HA, when multiple RPC handler threads call preExecute and
> validateAndUpdateCache, entries are inserted into the double buffer in the
> order the requests are received.
>  
> In this Jira we shall not convert the non-HA code path to use this, as the
> security and ACL work is not yet complete for this new model.
>  
>  






[jira] [Commented] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-30 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896796#comment-16896796
 ] 

Doroszlai, Attila commented on HDDS-1834:
-

Thanks [~xyao] for committing it.  Can you please double-check 
[ozone-0.4.1|https://github.com/apache/hadoop/commits/ozone-0.4.1]?  I don't 
see the commit there.

> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The ozonesecure-ozonefs acceptance test is failing because {{ozone fs -mkdir
> -p}} creates a key only for the specified directory, not for its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on the first operation that tries to use {{testdir/}}
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}






[jira] [Updated] (HDFS-8708) DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies

2019-07-30 Thread Chengbing Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengbing Liu updated HDFS-8708:

 Assignee: Chengbing Liu  (was: Brahma Reddy Battula)
Affects Version/s: 3.2.0
   3.1.2
   Status: Patch Available  (was: Reopened)

> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies
> --
>
> Key: HDFS-8708
> URL: https://issues.apache.org/jira/browse/HDFS-8708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.2.0
>Reporter: Jitendra Nath Pandey
>Assignee: Chengbing Liu
>Priority: Critical
> Attachments: HDFS-8708.001.patch
>
>
> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies to
> ensure fast failover. Otherwise, the DFSClient retries the NameNode that is
> no longer active, which delays the failover.






[jira] [Updated] (HDFS-8708) DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies

2019-07-30 Thread Chengbing Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengbing Liu updated HDFS-8708:

Attachment: HDFS-8708.001.patch

> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies
> --
>
> Key: HDFS-8708
> URL: https://issues.apache.org/jira/browse/HDFS-8708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-8708.001.patch
>
>
> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies to
> ensure fast failover. Otherwise, the DFSClient retries the NameNode that is
> no longer active, which delays the failover.






[jira] [Commented] (HDFS-14661) RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not exist

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896792#comment-16896792
 ] 

Hadoop QA commented on HDFS-14661:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 18s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976289/HDFS-14661-trunk-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ce8da86df59 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f2dad6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27346/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27346/testReport/ |
| Max. process+thread count | 1583 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-14645) ViewFileSystem should close the child FileSystems in close()

2019-07-30 Thread Jihyun Cho (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896791#comment-16896791
 ] 

Jihyun Cho commented on HDFS-14645:
---

Thanks for the reviews. I fixed them.

> ViewFileSystem should close the child FileSystems in close()
> 
>
> Key: HDFS-14645
> URL: https://issues.apache.org/jira/browse/HDFS-14645
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.8, 3.3.0
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: HDFS-14645.001.patch, HDFS-14645.002.patch, 
> HDFS-14645.003.patch
>
>
> In the current implementation, {{ViewFileSystem}} uses the superclass's {{close}}.
> It removes the entry from {{FileSystem.CACHE}} without closing the child FileSystems.
> To close properly, when a ViewFileSystem is closed, its child FileSystems
> should be closed as well.
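A rough sketch of the requested behaviour (not the actual patch; it assumes the mounted children are reachable via FileSystem#getChildFileSystems()):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ViewFileSystem;

// Illustration only: close the mounted child FileSystems first, then let
// super.close() remove this instance from FileSystem.CACHE.
public class ClosingViewFileSystem extends ViewFileSystem {

  public ClosingViewFileSystem() throws IOException {
    super();
  }

  @Override
  public void close() throws IOException {
    IOException firstFailure = null;
    for (FileSystem child : getChildFileSystems()) {
      try {
        child.close();
      } catch (IOException e) {
        if (firstFailure == null) {
          firstFailure = e;  // keep closing the rest, rethrow the first error
        }
      }
    }
    super.close();           // removes the entry from FileSystem.CACHE
    if (firstFailure != null) {
      throw firstFailure;
    }
  }
}
{code}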






[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285512&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285512
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 05:23
Start Date: 31/Jul/19 05:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516700673
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 717 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 889 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 413 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 603 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 364 | the patch passed |
   | +1 | javac | 364 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 350 | hadoop-hdds in the patch failed. |
   | -1 | unit | 3430 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 9290 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.container.common.transport.server.ratis.TestCSMMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1174 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab84367207b3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0f2dad6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/testReport/ |
   | Max. process+thread count | 3684 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Commented] (HDFS-14524) NNTop total counts does not add up as expected

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896788#comment-16896788
 ] 

Hadoop QA commented on HDFS-14524:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 50 new + 463 unchanged - 1 fixed = 513 total (was 464) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 48 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.server.blockmanagement.TestSlowDiskTracker |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970383/HDFS-14524.001.patch |
| Optional Tests |  dupname  asflicense  compile  

[jira] [Commented] (HDFS-14684) Start the CLI MiniCluster failed because the default format option is false

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896786#comment-16896786
 ] 

Hadoop QA commented on HDFS-14684:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
14s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:e402791a51a |
| JIRA Issue | HDFS-14684 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970421/HADOOP-16337.branch-3.0.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 9200e78d6d81 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.0 / ec00431 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27343/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Start the CLI MiniCluster failed because the default format option is false
> ---
>
> Key: HDFS-14684
> URL: https://issues.apache.org/jira/browse/HDFS-14684
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.4, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HADOOP-16337.branch-3.0.001.patch, 
> HADOOP-16337.trunk.001.patch
>
>
> After HADOOP-14970, the -format option needs to be added when starting the
> CLI MiniCluster, but the CLIMiniCluster document was not updated. Following
> the document to start the CLI MiniCluster results in the error below.
> {code:java}
> 19/05/30 10:27:19 WARN common.Storage: Storage directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 does not exist
> 19/05/30 10:27:19 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 is in an inconsistent 
> state: storage directory does not exist or is not accessible.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
> at 

[jira] [Updated] (HDFS-14645) ViewFileSystem should close the child FileSystems in close()

2019-07-30 Thread Jihyun Cho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jihyun Cho updated HDFS-14645:
--
Attachment: HDFS-14645.003.patch

> ViewFileSystem should close the child FileSystems in close()
> 
>
> Key: HDFS-14645
> URL: https://issues.apache.org/jira/browse/HDFS-14645
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.8, 3.3.0
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: HDFS-14645.001.patch, HDFS-14645.002.patch, 
> HDFS-14645.003.patch
>
>
> In the current implementation, {{ViewFileSystem}} uses the superclass's {{close}}.
> It removes the entry from {{FileSystem.CACHE}} without closing the child FileSystems.
> To close properly, when a ViewFileSystem is closed, its child FileSystems
> should be closed as well.






[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285508
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 04:53
Start Date: 31/Jul/19 04:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516694943
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 363 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 848 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 352 | the patch passed |
   | +1 | javac | 352 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 617 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 686 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 194 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1915 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7473 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1174 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4bbc9ab239dc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0f2dad6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/testReport/ |
   | Max. process+thread count | 3749 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 285508)
Time Spent: 4h 10m  (was: 4h)

> Make changes 

[jira] [Commented] (HDFS-14080) DFS usage metrics reported in incorrect prefix

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896770#comment-16896770
 ] 

Hadoop QA commented on HDFS-14080:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14080 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948151/HDFS-14080.001.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 3c6c3a3e4420 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f2dad6 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27344/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DFS usage metrics reported in incorrect prefix
> --
>
> Key: HDFS-14080
> URL: https://issues.apache.org/jira/browse/HDFS-14080
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Trivial
> Attachments: HDFS-14080.001.patch
>
>
> The NameNode webapp reports DFS usage metrics using standard SI prefixes (MB,
> GB, etc.). However, the number shown in the UI is calculated as a binary size,
> which should be denoted with binary prefixes (MiB, GiB, etc.). The NameNode
> webapp should be modified to use the correct binary prefixes.
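The arithmetic difference is easy to see in isolation (plain Java, no Hadoop code):

{code:java}
// A size computed with powers of 1024 is a binary size and should carry a
// binary prefix; dividing by powers of 1000 gives the SI value instead.
public class PrefixExample {
  public static void main(String[] args) {
    long bytes = 5L * 1024 * 1024 * 1024;                        // 5368709120
    System.out.println(bytes / (1024.0 * 1024 * 1024) + " GiB"); // 5.0 GiB
    System.out.println(bytes / 1e9 + " GB");                     // ~5.37 GB
  }
}
{code}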






[jira] [Commented] (HDFS-14569) Result of crypto -listZones is not formatted properly

2019-07-30 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896768#comment-16896768
 ] 

hemanthboyina commented on HDFS-14569:
--

thanks [~jojochuang] 

> Result of crypto -listZones is not formatted properly
> -
>
> Key: HDFS-14569
> URL: https://issues.apache.org/jira/browse/HDFS-14569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14569.patch, image-2019-06-14-17-39-42-244.png
>
>
> hdfs crypto -listZones displays zones and keys.
> If the zone length plus the key length is greater than 80 characters, the key
> column displays only 4 characters per row, which is too small.
> !image-2019-06-14-17-39-42-244.png!
> The result is not formatted properly.
> Increase the column width for the key.
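A plain-Java illustration of the column problem (not the CryptoAdmin code; the zone path and key name are made up):

{code:java}
// With a fixed ~80-character line budget, a long zone path leaves only a few
// characters for the key column, so the key wraps unreadably; giving the key
// its own full-width column keeps the listing readable.
public class ListZonesFormatting {
  public static void main(String[] args) {
    String zone = "/very/long/encryption/zone/path/used/by/the/analytics/team/in/production";
    String key = "analytics-ez-key-2019";

    // Cramped: the key is squeezed into whatever is left of an 80-char line.
    int keyWidth = Math.max(4, 80 - zone.length() - 2);
    System.out.printf("%s  %." + keyWidth + "s%n", zone, key);

    // Readable: give the key column its full width on its own budget.
    System.out.printf("%-74s  %s%n", zone, key);
  }
}
{code}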






[jira] [Commented] (HDFS-14575) LeaseRenewer#daemon threads leak in DFSClient

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896764#comment-16896764
 ] 

Hadoop QA commented on HDFS-14575:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 23 unchanged - 0 fixed = 24 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 52s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestLeaseRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14575 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972036/HDFS-14575.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b5809aa23a93 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f2dad6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27342/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27342/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27342/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 

[jira] [Updated] (HDDS-1875) Fix failures in TestS3MultipartUploadAbortResponse

2019-07-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1875:
-
Target Version/s: 0.5.0
  Status: Patch Available  (was: Open)

> Fix failures in TestS3MultipartUploadAbortResponse
> --
>
> Key: HDDS-1875
> URL: https://issues.apache.org/jira/browse/HDDS-1875
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net//job/ozone/17503//testReport/junit/org.apache.hadoop.ozone.om.response.s3.multipart/TestS3MultipartUploadAbortResponse/testAddDBToBatchWithParts/]






[jira] [Commented] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-07-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896762#comment-16896762
 ] 

Bharat Viswanadham commented on HDDS-1737:
--

Now that we have a full cache for the volume and bucket tables, we can bring 
this change in if needed.

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This is to address a TODO to add volume existence checks when performing Key/File 
> operations; a sketch of the idea follows below.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  
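For illustration only, here is a minimal sketch of the volume-existence check the TODO above refers to. The class and method names, and the Map standing in for the volume table, are assumptions made for this sketch, not the Ozone Manager API.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the TODO above: fail fast when the volume is missing
// before performing key/file operations. All names are illustrative assumptions.
public class VolumeCheck {
  // Stands in for the "full cache" of the volume table mentioned in the comment.
  private final Map<String, String> volumeTable = new ConcurrentHashMap<>();

  boolean volumeExists(String volumeName) {
    return volumeTable.containsKey(volumeName);
  }

  void checkVolume(String volumeName) {
    if (!volumeExists(volumeName)) {
      throw new IllegalStateException("Volume not found: " + volumeName);
    }
  }
}
{code}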



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1875) Fix failures in TestS3MultipartUploadAbortResponse

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1875?focusedWorklogId=285489=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285489
 ]

ASF GitHub Bot logged work on HDDS-1875:


Author: ASF GitHub Bot
Created on: 31/Jul/19 04:00
Start Date: 31/Jul/19 04:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1188: 
HDDS-1875. Fix failures in TestS3MultipartUploadAbortResponse.
URL: https://github.com/apache/hadoop/pull/1188
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285489)
Time Spent: 10m
Remaining Estimate: 0h

> Fix failures in TestS3MultipartUploadAbortResponse
> --
>
> Key: HDDS-1875
> URL: https://issues.apache.org/jira/browse/HDDS-1875
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net//job/ozone/17503//testReport/junit/org.apache.hadoop.ozone.om.response.s3.multipart/TestS3MultipartUploadAbortResponse/testAddDBToBatchWithParts/]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1875) Fix failures in TestS3MultipartUploadAbortResponse

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1875:
-
Labels: pull-request-available  (was: )

> Fix failures in TestS3MultipartUploadAbortResponse
> --
>
> Key: HDDS-1875
> URL: https://issues.apache.org/jira/browse/HDDS-1875
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> [https://ci.anzix.net//job/ozone/17503//testReport/junit/org.apache.hadoop.ozone.om.response.s3.multipart/TestS3MultipartUploadAbortResponse/testAddDBToBatchWithParts/]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1875) Fix failures in TestS3MultipartUploadAbortResponse

2019-07-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1875:


 Summary: Fix failures in TestS3MultipartUploadAbortResponse
 Key: HDDS-1875
 URL: https://issues.apache.org/jira/browse/HDDS-1875
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


[https://ci.anzix.net//job/ozone/17503//testReport/junit/org.apache.hadoop.ozone.om.response.s3.multipart/TestS3MultipartUploadAbortResponse/testAddDBToBatchWithParts/]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14679) failed to add erasure code policies with example template

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896756#comment-16896756
 ] 

Hadoop QA commented on HDFS-14679:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14679 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976284/HDFS-14679-02.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  |
| uname | Linux 7bff0e64e3e5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f2dad6 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27341/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27341/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature and ran into an 
> issue when adding policies. The example on adding erasure code policies with the 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be due to a codec case mismatch (upper case vs lower case). 
> The codec is in upper case in the example template[1] while all available 
> codecs are lower case[2]. A way to fix it may be just converting the codec to 
> 
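A minimal sketch of the lower-casing idea described in the comment above; the class and method names are assumptions for illustration, not the actual ECPolicyLoader change.

{code:java}
import java.util.Locale;
import java.util.Set;

// Illustrative only: normalize the codec name read from the policy XML to
// lower case before validating it against the registered codecs.
public class CodecNameNormalizer {
  // Lower-case codec names, as the available codecs are registered (per the comment above).
  private static final Set<String> SUPPORTED = Set.of("rs", "rs-legacy", "xor");

  static String normalize(String codecFromXml) {
    return codecFromXml.trim().toLowerCase(Locale.ROOT);
  }

  static boolean isSupported(String codecFromXml) {
    return SUPPORTED.contains(normalize(codecFromXml));
  }

  public static void main(String[] args) {
    System.out.println(isSupported("RS-LEGACY")); // true once normalized
  }
}
{code}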

[jira] [Commented] (HDFS-14290) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2019-07-30 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896754#comment-16896754
 ] 

Lisheng Sun commented on HDFS-14290:


Yeah. Sorry. It's not a problem. I have closed it. 

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-14290
> URL: https://issues.apache.org/jira/browse/HDFS-14290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14290.000.patch, webhdfs show.png
>
>
> The issue is that there is no HttpRequestDecoder in the netty InboundHandler, 
> so an unexpected message type appears when reading the message.
>   
> !webhdfs show.png!   
> DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy 
> failed. Cause: 
>  com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: 
> java.lang.IllegalStateException: unexpected message type: 
> PooledUnsafeDirectByteBuf
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
>  at 
> org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
>  at 
> 

[jira] [Updated] (HDFS-14290) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2019-07-30 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14290:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-14290
> URL: https://issues.apache.org/jira/browse/HDFS-14290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14290.000.patch, webhdfs show.png
>
>
> The issue is that there is no HttpRequestDecoder in the netty InboundHandler, 
> so an unexpected message type appears when reading the message.
>   
> !webhdfs show.png!   
> DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy 
> failed. Cause: 
>  com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: 
> java.lang.IllegalStateException: unexpected message type: 
> PooledUnsafeDirectByteBuf
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
>  at 
> org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
>  at 
> 

[jira] [Commented] (HDFS-14290) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896751#comment-16896751
 ] 

Wei-Chiu Chuang commented on HDFS-14290:


Looks like this is the same as HDFS-13899.  From the stack trace, this is 
probably something internal in Xiaomi's netty. Shall we resolve this as won't 
fix?

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-14290
> URL: https://issues.apache.org/jira/browse/HDFS-14290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14290.000.patch, webhdfs show.png
>
>
> The issue is that there is no HttpRequestDecoder in the netty InboundHandler, 
> so an unexpected message type appears when reading the message.
>   
> !webhdfs show.png!   
> DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy 
> failed. Cause: 
>  com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: 
> java.lang.IllegalStateException: unexpected message type: 
> PooledUnsafeDirectByteBuf
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
>  at 
> org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
>  at 
> 

[jira] [Commented] (HDFS-14674) Got an unexpected txid when tail editlog

2019-07-30 Thread Wu Weiwei (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896750#comment-16896750
 ] 

Wu Weiwei commented on HDFS-14674:
--

[~wangzhaohui] Thanks for the patch. 

I configured the above configuration items on a busy ns standby, and the 
standby namenode crashed as you said.

I think this configuration item will cause a txid gap when reading multiple 
edit streams at the same time.
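As a worked illustration of the gap, using the two txids reported in the log quoted below (plain arithmetic on the logged values, not code from any patch):

{code:java}
// The loader expected one txid but the next edits stream started much later,
// hence the "There appears to be a gap in the edit log" error in the log below.
public class TxidGapIllustration {
  public static void main(String[] args) {
    long expected = 232_056_752_162L; // "We expected txid 232056752162"
    long got = 232_077_264_498L;      // "but got txid 232077264498"
    System.out.println("gap of " + (got - expected) + " transactions");
  }
}
{code}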

> Got an unexpected txid when tail editlog
> 
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14674-001.patch, image-2019-07-26-11-34-23-405.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> // code placeholder
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If the dfs.ha.tail-edits.max-txns-per-lock value is 500, the namenode loads 
> the editlog until 500 transactions and then loads the next editlog, but the 
> editlog contains more than 500 transactions. So the namenode got an unexpected txid when tailing the editlog.
>  
>  
> {code:java}
> // code placeholder [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,

[jira] [Commented] (HDDS-153) Add HA-aware proxy for OM client

2019-07-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896747#comment-16896747
 ] 

Arpit Agarwal commented on HDDS-153:


This should be fixed already by a sub-task of HDDS-505. [~hanishakoneru] can 
you confirm and resolve?

> Add HA-aware proxy for OM client 
> -
>
> Key: HDDS-153
> URL: https://issues.apache.org/jira/browse/HDDS-153
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> This allows the client to talk to OMs in RATIS ring when failover (leader 
> change) happens. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-152) Support HA for Ozone Manager

2019-07-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-152.

Resolution: Duplicate

> Support HA for Ozone Manager
> 
>
> Key: HDDS-152
> URL: https://issues.apache.org/jira/browse/HDDS-152
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> Ozone Manager (OM) provides the name services on top of HDDS (SCM). This ticket 
> is opened to add HA support for OM. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-152) Support HA for Ozone Manager

2019-07-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDDS-152:


> Support HA for Ozone Manager
> 
>
> Key: HDDS-152
> URL: https://issues.apache.org/jira/browse/HDDS-152
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> Ozone Manager (OM) provides the name services on top of HDDS (SCM). This ticket 
> is opened to add HA support for OM. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10648) Expose Balancer metrics through Metrics2

2019-07-30 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang reassigned HDFS-10648:
-

Assignee: Chen Zhang

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover, metrics
>Reporter: Mark Wagner
>Assignee: Chen Zhang
>Priority: Major
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-151) Add HA support for Ozone

2019-07-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-151.

Resolution: Duplicate

Resolving as a duplicate of HDDS-505. This was filed first; however, OM HA 
development has been happening on HDDS-505 for a while now.

> Add HA support for Ozone
> 
>
> Key: HDDS-151
> URL: https://issues.apache.org/jira/browse/HDDS-151
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This includes HA for OM and SCM and their clients.  For OM and SCM, our 
> initial proposal is to use RATIS to ensure consistent/reliable replication of 
> metadata. We will post a design doc and create a separate branch for the 
> feature development.
> cc: [~anu], [~jnpandey], [~szetszwo], [~msingh], [~hellodengfei]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13783) Balancer: make balancer to be a long service process for easy to monitor it.

2019-07-30 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896742#comment-16896742
 ] 

Chen Zhang commented on HDFS-13783:
---

Thanks [~xkrogen] for your patient help. I've updated the document on 
HDFS-14662; do you have time to review it?

> Balancer: make balancer to be a long service process for easy to monitor it.
> 
>
> Key: HDFS-13783
> URL: https://issues.apache.org/jira/browse/HDFS-13783
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: maobaolong
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13783-001.patch, HDFS-13783-002.patch, 
> HDFS-13783.003.patch, HDFS-13783.004.patch, HDFS-13783.005.patch, 
> HDFS-13783.006.patch
>
>
> If we have a long-running balancer service process, like the namenode and datanode, we 
> can get metrics from the balancer; the metrics can tell us the status of the balancer and 
> the amount of blocks it has moved. 
> We could also get or set the balance plan through the balancer webUI. There are many things we 
> could do if we had a long-running balancer service process.
> So, shall we start to plan the new Balancer? Hope this feature can enter the 
> next release of Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-867) Fix metric numKeys for overwrite scenario

2019-07-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896741#comment-16896741
 ] 

Arpit Agarwal commented on HDDS-867:


[~bharatviswa] is this issue still valid?

> Fix metric numKeys for overwrite scenario
> -
>
> Key: HDDS-867
> URL: https://issues.apache.org/jira/browse/HDDS-867
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Currently, we increment the key count even when overwriting an existing key.
> This Jira is to fix this issue.
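A hedged sketch of the fix idea in the description above: only bump numKeys when the key did not already exist. The Map stands in for the OM key table, and all names are illustrative assumptions, not the Ozone code.

{code:java}
import java.util.HashMap;
import java.util.Map;

// numKeys should only grow for genuinely new keys; an overwrite leaves it unchanged.
public class NumKeysOnOverwrite {
  private final Map<String, String> keyTable = new HashMap<>(); // stands in for the OM key table
  private long numKeys = 0;

  void putKey(String name, String info) {
    boolean existed = keyTable.containsKey(name);
    keyTable.put(name, info);
    if (!existed) {
      numKeys++; // overwriting an existing key does not increment the count
    }
  }

  public static void main(String[] args) {
    NumKeysOnOverwrite om = new NumKeysOnOverwrite();
    om.putKey("vol/bucket/key1", "v1");
    om.putKey("vol/bucket/key1", "v2"); // overwrite
    System.out.println(om.numKeys);     // prints 1, not 2
  }
}
{code}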



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14680) StorageInfoDefragmenter should handle exceptions gently

2019-07-30 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896739#comment-16896739
 ] 

Chen Zhang commented on HDFS-14680:
---

No, I didn't encounter this issue.

I'm working on HDFS-14657, which relates to HDFS-9620. While reading the code of 
HDFS-9620, I found this design too aggressive: StorageInfoDefragmenter should not 
shut down the NameNode on any exception, because it's not a critical thread. I mean, 
it should at least retry a few times before shutting down the NameNode, or maybe it 
can choose to keep running no matter what exception happens, like HeartbeatManager.

We're upgrading our production cluster from 2.6 to 3.1, and I don't want this to 
happen to our NameNode, so it's just a proposal for discussion.
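A minimal sketch of the retry-before-shutdown behavior proposed above, written against a hypothetical Compactor interface; it is not the HDFS implementation.

{code:java}
// Illustrative only: keep scanning, count consecutive failures, and only
// escalate (e.g. terminate the NameNode) after the retry budget is exhausted.
public class DefragmenterRetryLoop {
  interface Compactor { void scanAndCompact() throws Exception; }

  static void runWithRetries(Compactor compactor, int maxRetries) {
    int failures = 0;
    while (true) {
      try {
        compactor.scanAndCompact();
        failures = 0;                       // reset after a successful pass
      } catch (Exception e) {
        failures++;
        if (failures > maxRetries) {
          // only now give up and escalate
          throw new RuntimeException("giving up after " + maxRetries + " retries", e);
        }
      }
      try {
        Thread.sleep(10_000);               // pause between passes
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}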

> StorageInfoDefragmenter should handle exceptions gently
> ---
>
> Key: HDFS-14680
> URL: https://issues.apache.org/jira/browse/HDFS-14680
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Priority: Major
>
> StorageInfoDefragmenter is responsible for FoldedTreeSet compaction, but it 
> terminates the NameNode on any exception; is that too radical?
> I mean, even critical threads like HeartbeatManager don't terminate the 
> NameNode once they encounter exceptions, so StorageInfoDefragmenter should not 
> do that either.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=285478=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285478
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 31/Jul/19 03:23
Start Date: 31/Jul/19 03:23
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1164: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#issuecomment-516678395
 
 
   New PR with checkstyle fix: https://github.com/apache/hadoop/pull/1187
   
   Pending CI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285478)
Time Spent: 3h 40m  (was: 3.5h)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.
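A hedged sketch of the idea in the description above: after a restart or reload, recompute numKeys from the reloaded store instead of trusting the stale saved value. The Map stands in for the OM DB, and all names are assumptions for illustration.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// On reload, reset the gauge from the authoritative state rather than the saved value.
public class NumKeysReinit {
  private final Map<String, String> keyStore = new ConcurrentHashMap<>(); // stands in for the OM DB
  private final AtomicLong numKeys = new AtomicLong();

  void onReload() {
    numKeys.set(keyStore.size()); // recompute from the reloaded store
  }

  public static void main(String[] args) {
    NumKeysReinit om = new NumKeysReinit();
    om.keyStore.put("vol/bucket/key1", "v1");
    om.onReload();
    System.out.println(om.numKeys.get()); // 1
  }
}
{code}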



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=285475=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285475
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 31/Jul/19 03:22
Start Date: 31/Jul/19 03:22
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1164: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#issuecomment-516678138
 
 
   @bharatviswa504 Sure.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285475)
Time Spent: 3h 20m  (was: 3h 10m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=285477=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285477
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 31/Jul/19 03:22
Start Date: 31/Jul/19 03:22
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1187: HDDS-1829 On 
OM reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285477)
Time Spent: 3.5h  (was: 3h 20m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-30 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896730#comment-16896730
 ] 

Chen Zhang commented on HDFS-14652:
---

Thanks [~jojochuang]. I also don't know why these machines were initialized with 
net.ipv4.tcp_syn_retries=1; our company has hundreds of production services, and 
different services have different requirements, so maybe it's just a mistake 
made by our DevOps, but it's absolutely not what we want. We've set this config 
to 6 on all Hadoop machines.
{quote}Does it help to update ha.health-monitor.rpc-timeout.ms? This is by 
default 45 seconds. We found that bumping it to 90 or even 180 helps to work 
around certain long running HDFS RPCs.
{quote}
Yes, we've updated the ha.health-monitor.rpc-timeout.ms config, and it helps. 
This Jira is just a proposal: since the health-monitor has a separate config key for 
the rpc-timeout, the retry count should also be configurable rather than hard-coded 
to 1. If we don't want the health-monitor to be so sensitive, we can at least change 
its behavior through this configuration.
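A small sketch of the tuning discussed in this thread; both keys are quoted from the thread, and the values are examples, not recommendations.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only, assuming hadoop-common on the classpath.
public class HealthMonitorTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Raising the monitor's RPC timeout from the 45s default, as suggested above.
    conf.setInt("ha.health-monitor.rpc-timeout.ms", 90_000);
    // The connect retry count is what this Jira proposes to make effective for the
    // health-monitor; today the monitor hard-codes it to 1 (per the description below),
    // so this setting alone would not change its behavior.
    conf.setInt("ipc.client.connect.max.retries", 3);
    System.out.println(conf.get("ha.health-monitor.rpc-timeout.ms"));
  }
}
{code}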

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the tcp 
> kernel queue on the NameNode's host. Since the configuration value of 
> "net.ipv4.tcp_syn_retries" in our environment is 1, after 3 seconds the 
> ZooKeeper HealthMonitor got a connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed this issue by enlarging the kernel parameter net.ipv4.tcp_syn_retries to 6.
> But while working on this issue, we found that the connection retry 
> count (ipc.client.connect.max.retries) of the health-monitor is hard-coded as 1. I 
> think it should be configurable; then, if we don't want the health-monitor to be so 
> sensitive, we can change its behavior by changing this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896727#comment-16896727
 ] 

Hadoop QA commented on HDFS-12914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_212. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_212. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 12 new + 379 unchanged - 0 fixed = 391 total (was 379) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-12914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976282/HDFS-12914.branch-2.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f60912f60537 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 77d1aa9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HDFS-14557) JournalNode error: Can't scan a pre-transactional edit log

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896723#comment-16896723
 ] 

Hadoop QA commented on HDFS-14557:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14557 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976274/HDFS-14557.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 02eec02dd0d5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f2dad6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27337/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27337/testReport/ |
| Max. process+thread count | 4477 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Resolved] (HDFS-14542) Remove redundant code when verify quota

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun resolved HDFS-14542.

Resolution: Duplicate

Thanks [~jojochuang] for the reminder; I'll close this with Resolution: 
Duplicate.

> Remove redundant code when verify quota
> ---
>
> Key: HDFS-14542
> URL: https://issues.apache.org/jira/browse/HDFS-14542
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14542.patch
>
>
> DirectoryWithQuotaFeature.verifyQuotaByStorageType() does the job of 
> verifying quota. It's redundant to call isQuotaByStorageTypeSet() because the 
> for-each loop on the next line already does the same per-type check.
> {code:java}
> private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta) 
>  throws QuotaByStorageTypeExceededException {
>   if (!isQuotaByStorageTypeSet()) { // REDUNDANT.
> return;
>   }
>   for (StorageType t: StorageType.getTypesSupportingQuota()) {
> if (!isQuotaByStorageTypeSet(t)) { // CHECK FOR EACH STORAGETYPE.
>   continue;
> }
> if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
> typeDelta.get(t))) {
>   throw new QuotaByStorageTypeExceededException(
>   quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
> }
>   }
> }
> {code}
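
A minimal sketch of the proposed simplification, keeping the names from the quoted method: drop the up-front aggregate check and rely on the per-StorageType check that is already inside the loop.

{code:java}
// Sketch only -- same behaviour as the quoted method, with the redundant
// aggregate isQuotaByStorageTypeSet() call removed.
private void verifyQuotaByStorageType(EnumCounters<StorageType> typeDelta)
    throws QuotaByStorageTypeExceededException {
  for (StorageType t : StorageType.getTypesSupportingQuota()) {
    if (!isQuotaByStorageTypeSet(t)) { // the per-type check is sufficient
      continue;
    }
    if (Quota.isViolated(quota.getTypeSpace(t), usage.getTypeSpace(t),
        typeDelta.get(t))) {
      throw new QuotaByStorageTypeExceededException(
          quota.getTypeSpace(t), usage.getTypeSpace(t) + typeDelta.get(t), t);
    }
  }
}
{code}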



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14631:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Thanks [~jojochuang] for the reminder, I'll close it with Resolution: 
Duplicate.

> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch
>
>
> When DirectoryScanner scans block files, if a block refers to a block 
> file that does not exist, the DirectoryScanner will update the block based on the 
> replica file found on the disk. See FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because in 
> LocalReplica#parseBaseDir() the 'subdir' components are ignored.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14631:
---
Comment: was deleted

(was: Thanks [~jojochuang] for the reminder, I'll close it with Resolution: 
Duplicate.)

> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch
>
>
> When DirectoryScanner scans block files, if a block refers to a block 
> file that does not exist, the DirectoryScanner will update the block based on the 
> replica file found on the disk. See FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because in 
> LocalReplica#parseBaseDir() the 'subdir' components are ignored.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.

2019-07-30 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun reopened HDFS-14631:


> The DirectoryScanner doesn't fix the wrongly placed replica.
> 
>
> Key: HDFS-14631
> URL: https://issues.apache.org/jira/browse/HDFS-14631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14631.001.patch, HDFS-14631.002.patch, 
> HDFS-14631.003.patch
>
>
> When DirectoryScanner scans block files, if a block refers to a block 
> file that does not exist, the DirectoryScanner will update the block based on the 
> replica file found on the disk. See FsDatasetImpl#checkAndUpdate.
>  
> {code:java}
> /*
> * Block exists in volumeMap and the block file exists on the disk
> */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>   + memBlockInfo.getBlockURI()
>   + " does not exist. Updating it to the file found during scan "
>   + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>   StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>   + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because in 
> LocalReplica#parseBaseDir() the 'subdir' components are ignored.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14661) RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not exist

2019-07-30 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896720#comment-16896720
 ] 

xuzq commented on HDFS-14661:
-

Thanks [~ayushtkn], and I'm sorry for replying so late. I have updated the patch, 
please have a look.

> RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not 
> exist
> ---
>
> Key: HDFS-14661
> URL: https://issues.apache.org/jira/browse/HDFS-14661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.2
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14661-HDFS-13891-001.patch, 
> HDFS-14661-trunk-001.patch, HDFS-14661-trunk-002.patch, 
> HDFS-14661-trunk-003.patch
>
>
> The updateMountTableEntry shouldn't update the mountEntry if the targetPath 
> does not exist.
> {code:java}
> @Override
> public UpdateMountTableEntryResponse updateMountTableEntry(
> UpdateMountTableEntryRequest request) throws IOException {
>   UpdateMountTableEntryResponse response =
>   getMountTableStore().updateMountTableEntry(request);
>   MountTable mountTable = request.getEntry();
>   if (mountTable != null && router.isQuotaEnabled()) {
> synchronizeQuota(mountTable.getSourcePath(),
> mountTable.getQuota().getQuota(),
> mountTable.getQuota().getSpaceQuota());
>   }
>   return response;
> }
> /**
>  * Synchronize the quota value across mount table and subclusters.
>  * @param path Source path in given mount table.
>  * @param nsQuota Name quota definition in given mount table.
>  * @param ssQuota Space quota definition in given mount table.
>  * @throws IOException
>  */
> private void synchronizeQuota(String path, long nsQuota, long ssQuota)
> throws IOException {
>   if (router.isQuotaEnabled() &&
>   (nsQuota != HdfsConstants.QUOTA_DONT_SET
>   || ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
> HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
> if (ret != null) {
>   this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
>   ssQuota, null);
> }
>   }
> }
> {code}
> As above, updateMountTableEntry updates one mountEntry in two steps:
>  # update the mountEntry in ZooKeeper
>  # synchronizeQuota (which may throw an exception such as "Directory does not 
> exist")
>  
> If synchronizeQuota throws an exception, that exception is returned to 
> dfsRouterAdmin, but the new mountEntry has already been written to ZooKeeper.  
> It's clearly not what we would expect.
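
A rough sketch of the direction such a fix could take (the targetPathsExist() helper is hypothetical, not the actual patch): validate the target paths before anything is written to the state store, so a later synchronizeQuota failure cannot leave a half-applied update behind.

{code:java}
// Hypothetical sketch only -- helper names are assumptions, not the patch.
@Override
public UpdateMountTableEntryResponse updateMountTableEntry(
    UpdateMountTableEntryRequest request) throws IOException {
  MountTable mountTable = request.getEntry();
  if (mountTable != null && !targetPathsExist(mountTable)) {
    // Fail fast: nothing has been written to the state store yet.
    throw new IOException("Target path of mount entry "
        + mountTable.getSourcePath() + " does not exist");
  }
  UpdateMountTableEntryResponse response =
      getMountTableStore().updateMountTableEntry(request);
  if (mountTable != null && router.isQuotaEnabled()) {
    synchronizeQuota(mountTable.getSourcePath(),
        mountTable.getQuota().getQuota(),
        mountTable.getQuota().getSpaceQuota());
  }
  return response;
}
{code}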



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14661) RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not exist

2019-07-30 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14661:

Attachment: HDFS-14661-trunk-003.patch

> RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not 
> exist
> ---
>
> Key: HDFS-14661
> URL: https://issues.apache.org/jira/browse/HDFS-14661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.2
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14661-HDFS-13891-001.patch, 
> HDFS-14661-trunk-001.patch, HDFS-14661-trunk-002.patch, 
> HDFS-14661-trunk-003.patch
>
>
> The updateMountTableEntry shouldn't update the mountEntry if the targetPath 
> does not exist.
> {code:java}
> @Override
> public UpdateMountTableEntryResponse updateMountTableEntry(
> UpdateMountTableEntryRequest request) throws IOException {
>   UpdateMountTableEntryResponse response =
>   getMountTableStore().updateMountTableEntry(request);
>   MountTable mountTable = request.getEntry();
>   if (mountTable != null && router.isQuotaEnabled()) {
> synchronizeQuota(mountTable.getSourcePath(),
> mountTable.getQuota().getQuota(),
> mountTable.getQuota().getSpaceQuota());
>   }
>   return response;
> }
> /**
>  * Synchronize the quota value across mount table and subclusters.
>  * @param path Source path in given mount table.
>  * @param nsQuota Name quota definition in given mount table.
>  * @param ssQuota Space quota definition in given mount table.
>  * @throws IOException
>  */
> private void synchronizeQuota(String path, long nsQuota, long ssQuota)
> throws IOException {
>   if (router.isQuotaEnabled() &&
>   (nsQuota != HdfsConstants.QUOTA_DONT_SET
>   || ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
> HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
> if (ret != null) {
>   this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
>   ssQuota, null);
> }
>   }
> }
> {code}
> As above, updateMountTableEntry updates one mountEntry in two steps:
>  # update the mountEntry in ZooKeeper
>  # synchronizeQuota (which may throw an exception such as "Directory does not 
> exist")
>  
> If synchronizeQuota throws an exception, that exception is returned to 
> dfsRouterAdmin, but the new mountEntry has already been written to ZooKeeper.  
> It's clearly not what we would expect.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-8708) DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies

2019-07-30 Thread Chengbing Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengbing Liu reopened HDFS-8708:
-

I have a different opinion, so I'm reopening this issue.

In our production environment, we have both HA and non-HA clusters. A client 
should be able to access both kinds of clusters. This is our dilemma.

By setting dfs.client.retry.policy.enabled = true, currently we see:
1) HA nameservice: in case of an nn1 shutdown, the client still attempts to connect to 
nn1 many times (11 min by default) before failing over, which is undesired
2) non-HA namenode: the client keeps retrying the connection for 11 min by default

By setting dfs.client.retry.policy.enabled = false, currently we see:
1) HA nameservice: fast failover, everything works fine
2) non-HA namenode: no retry will be made in case of connection failure, which 
is undesired

We would like to ensure fast failover with HA mode as well as multiple retries 
with non-HA mode, and we cannot achieve this with the current implementation.

Proposed code change:
In {{NameNodeProxiesClient.createProxyWithAlignmentContext}}, {{defaultPolicy}} 
should not be passed to {{ClientProtocol}} when {{withRetries}} is false (HA 
mode). Instead, TRY_ONCE_THEN_FAIL can be used to ensure fast failover.
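
A minimal sketch of that selection (the variable names are assumed to match the existing locals; this is not the actual patch):

{code:java}
// Sketch only -- 'withRetries' and 'defaultPolicy' are assumed locals in
// createProxyWithAlignmentContext.
RetryPolicy rpcPolicy = withRetries
    ? defaultPolicy                      // non-HA: honour dfs.client.retry.policy.enabled
    : RetryPolicies.TRY_ONCE_THEN_FAIL;  // HA: fail fast, let the failover proxy retry
// ... pass rpcPolicy (instead of defaultPolicy) when creating the ClientProtocol proxy.
{code}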

> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies
> --
>
> Key: HDFS-8708
> URL: https://issues.apache.org/jira/browse/HDFS-8708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Brahma Reddy Battula
>Priority: Critical
>
> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies to 
> ensure fast failover. Otherwise, dfsclient retries the NN which is no longer 
> active and delays the failover.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14645) ViewFileSystem should close the child FileSystems in close()

2019-07-30 Thread Jihyun Cho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jihyun Cho updated HDFS-14645:
--
Attachment: (was: HDFS-14645.003.patch)

> ViewFileSystem should close the child FileSystems in close()
> 
>
> Key: HDFS-14645
> URL: https://issues.apache.org/jira/browse/HDFS-14645
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.8, 3.3.0
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: HDFS-14645.001.patch, HDFS-14645.002.patch
>
>
> {{ViewFileSystem}} uses the superclass's {{close}} in the current implementation.
> It removes the instance from {{FileSystem.CACHE}} without closing the child FileSystems.
> To close properly, when the FileSystem is closed, its child FileSystems 
> should be closed as well.
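
A minimal sketch of the proposed override, assuming the mounted targets are reachable via getChildFileSystems():

{code:java}
// Sketch only -- not the actual patch.
@Override
public void close() throws IOException {
  // Close the mounted child file systems first, then let the superclass
  // remove this instance from FileSystem.CACHE.
  for (FileSystem child : getChildFileSystems()) {
    child.close();
  }
  super.close();
}
{code}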



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14645) ViewFileSystem should close the child FileSystems in close()

2019-07-30 Thread Jihyun Cho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jihyun Cho updated HDFS-14645:
--
Attachment: HDFS-14645.003.patch

> ViewFileSystem should close the child FileSystems in close()
> 
>
> Key: HDFS-14645
> URL: https://issues.apache.org/jira/browse/HDFS-14645
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.8, 3.3.0
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Major
> Attachments: HDFS-14645.001.patch, HDFS-14645.002.patch, 
> HDFS-14645.003.patch
>
>
> {{ViewFileSystem}} uses the superclass's {{close}} in the current implementation.
> It removes the instance from {{FileSystem.CACHE}} without closing the child FileSystems.
> To close properly, when the FileSystem is closed, its child FileSystems 
> should be closed as well.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14080) DFS usage metrics reported in incorrect prefix

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14080:
--

Assignee: Greg Phillips

> DFS usage metrics reported in incorrect prefix
> --
>
> Key: HDFS-14080
> URL: https://issues.apache.org/jira/browse/HDFS-14080
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Trivial
> Attachments: HDFS-14080.001.patch
>
>
> The NameNode webapp reports DFS usage metrics using standard SI prefixes (MB, 
> GB, etc.). The number reported in the UI is calculated to be the binary size 
> which should be noted using binary prefixes (MiB, GiB, etc.). The NameNode 
> webapp should be modified to use the correct binary prefixes.
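
For reference, a small self-contained illustration of the difference between the two conventions (not taken from the webapp code):

{code:java}
// Illustration only: the same byte count under the two prefix conventions.
public class PrefixDemo {
  public static void main(String[] args) {
    long bytes = 1_610_612_736L;                   // 1.5 GiB of data
    double gb  = bytes / 1e9;                      // SI prefix: powers of 1000
    double gib = bytes / (1024.0 * 1024 * 1024);   // binary prefix: powers of 1024
    System.out.printf("%.2f GB (SI) vs %.2f GiB (binary)%n", gb, gib);
    // prints: 1.61 GB (SI) vs 1.50 GiB (binary)
  }
}
{code}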



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14117:
--

Assignee: venkata ramkumar

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> 
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117-HDFS-13891.001.patch, 
> HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, 
> HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, 
> HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, 
> HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, 
> HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, 
> HDFS-14117-HDFS-13891.012.patch, HDFS-14117-HDFS-13891.013.patch, 
> HDFS-14117-HDFS-13891.014.patch, HDFS-14117-HDFS-13891.015.patch, 
> HDFS-14117-HDFS-13891.016.patch, HDFS-14117-HDFS-13891.017.patch, 
> HDFS-14117-HDFS-13891.018.patch, HDFS-14117-HDFS-13891.019.patch, 
> HDFS-14117-HDFS-13891.020.patch, HDFS-14117.001.patch, HDFS-14117.002.patch, 
> HDFS-14117.003.patch, HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved 
> to trash by default.
> But in the global path we can only mount one trash dir /user. So we mount the 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete files or dirs of another 
> subcluster, such as hacluster, the delete fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3../opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14684) Start the CLI MiniCluster failed because the default format option is false

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14684:
--

Assignee: Guanghao Zhang

> Start the CLI MiniCluster failed because the default format option is false
> ---
>
> Key: HDFS-14684
> URL: https://issues.apache.org/jira/browse/HDFS-14684
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.4, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HADOOP-16337.branch-3.0.001.patch, 
> HADOOP-16337.trunk.001.patch
>
>
> After HADOOP-14970, the -format option needs to be added when starting the CLI 
> MiniCluster. But the CLIMiniCluster document wasn't updated, so following the 
> document to start the CLI MiniCluster produces the error below.
> {code:java}
> 19/05/30 10:27:19 WARN common.Storage: Storage directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 does not exist
> 19/05/30 10:27:19 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 is in an inconsistent 
> state: storage directory does not exist or is not accessible.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1162)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1037)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:830)
> at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:485)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:154)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:316)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
> at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDFS-14684) Start the CLI MiniCluster failed because the default format option is false

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved HADOOP-16337 to HDFS-14684:
-

Affects Version/s: (was: 3.1.2)
   (was: 2.8.5)
   (was: 3.0.3)
   (was: 2.9.2)
   (was: 3.2.0)
   (was: 2.8.4)
   2.8.4
   3.2.0
   2.9.2
   3.0.3
   2.8.5
   3.1.2
   Issue Type: Bug  (was: Improvement)
  Key: HDFS-14684  (was: HADOOP-16337)
  Project: Hadoop HDFS  (was: Hadoop Common)

> Start the CLI MiniCluster failed because the default format option is false
> ---
>
> Key: HDFS-14684
> URL: https://issues.apache.org/jira/browse/HDFS-14684
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 2.8.5, 3.0.3, 2.9.2, 3.2.0, 2.8.4
>Reporter: Guanghao Zhang
>Priority: Minor
> Attachments: HADOOP-16337.branch-3.0.001.patch, 
> HADOOP-16337.trunk.001.patch
>
>
> After HADOOP-14970, the -format option needs to be added when starting the CLI 
> MiniCluster. But the CLIMiniCluster document wasn't updated, so following the 
> document to start the CLI MiniCluster produces the error below.
> {code:java}
> 19/05/30 10:27:19 WARN common.Storage: Storage directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 does not exist
> 19/05/30 10:27:19 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 is in an inconsistent 
> state: storage directory does not exist or is not accessible.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1162)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1037)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:830)
> at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:485)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:154)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:316)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
> at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14575) LeaseRenewer#daemon threads leak in DFSClient

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14575:
--

Assignee: Tao Yang

> LeaseRenewer#daemon threads leak in DFSClient
> -
>
> Key: HDFS-14575
> URL: https://issues.apache.org/jira/browse/HDFS-14575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: HDFS-14575.001.patch
>
>
> Currently a LeaseRenewer (and its daemon thread) without clients should be 
> terminated after a grace period, which defaults to 60 seconds. A race 
> condition may happen when a new request arrives just after the LeaseRenewer 
> has expired.
>  Reproduce this race condition:
>  # Client#1 creates File#1: creates LeaseRenewer#1 and starts Daemon#1 
> thread, after a few seconds, File#1 is closed , there is no clients in 
> LeaseRenewer#1 now.
>  # 60 seconds (grace period) later, LeaseRenewer#1 just expires but daemon#1 
> thread is still in sleep, Client#1 creates File#2, lead to the creation of 
> Daemon#2.
>  # Daemon#1 is awake then exit, after that, LeaseRenewer#1 is removed from 
> factory.
>  # File#2 is closed after a few seconds, LeaseRenewer#2 is created since it 
> can’t get renewer from factory.
> Daemon#2 thread leaks from now on, since Client#1 in it can never be removed 
> and it won't have a chance to stop.
> To solve this problem, IIUIC, a simple way I think is to make sure that all 
> clients are cleared when LeaseRenewer is removed from factory. Please feel 
> free to give your suggestions. Thanks!
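
A very rough sketch of that suggestion (all names are hypothetical; the real LeaseRenewer/Factory internals differ): clear the renewer's client list under the factory lock at removal time, so a client that registered during the race cannot stay attached to a renewer the factory no longer tracks.

{code:java}
// Hypothetical sketch only -- not the actual LeaseRenewer code.
synchronized void remove(LeaseRenewer renewer) {
  LeaseRenewer stored = renewers.get(renewer.getFactoryKey());
  if (stored == renewer) {
    renewers.remove(renewer.getFactoryKey());
    // Force any client added during the race to re-register and obtain a
    // fresh renewer (and a fresh daemon) from the factory.
    renewer.clearClients();
  }
}
{code}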



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14612) SlowDiskReport won't update when SlowDisks is always empty in heartbeat

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896703#comment-16896703
 ] 

Wei-Chiu Chuang commented on HDFS-14612:


[~arp] [~hanishakoneru] would you please help review this patch?

> SlowDiskReport won't update when SlowDisks is always empty in heartbeat
> ---
>
> Key: HDFS-14612
> URL: https://issues.apache.org/jira/browse/HDFS-14612
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: HaiBin Huang
>Assignee: HaiBin Huang
>Priority: Major
> Attachments: HDFS-14612-001.patch, HDFS-14612.patch
>
>
> I found that SlowDiskReport won't update when slowDisks is always empty in 
> org.apache.hadoop.hdfs.server.blockmanagement.*handleHeartbeat*; this may 
> leave an outdated SlowDiskReport in the NameNode's JMX until the next time 
> slowDisks isn't empty. So I think the method 
> *checkAndUpdateReportIfNecessary()* should be called first when we want to 
> get the JMX information about the SlowDiskReport; this keeps the 
> SlowDiskReport exposed via JMX always valid.
>  
> There are also some incorrect object references in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.
> *DataNodeVolumeMetrics*
> {code:java}
> // Based on writeIoRate
> public long getWriteIoSampleCount() {
>   return syncIoRate.lastStat().numSamples();
> }
> public double getWriteIoMean() {
>   return syncIoRate.lastStat().mean();
> }
> public double getWriteIoStdDev() {
>   return syncIoRate.lastStat().stddev();
> }
> {code}
>  
>  
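
Presumably the fix is to make these getters read from a writeIoRate sampler instead of syncIoRate; a sketch under that assumption:

{code:java}
// Sketch only -- assumes DataNodeVolumeMetrics keeps a writeIoRate sampler
// alongside syncIoRate.
public long getWriteIoSampleCount() {
  return writeIoRate.lastStat().numSamples();
}

public double getWriteIoMean() {
  return writeIoRate.lastStat().mean();
}

public double getWriteIoStdDev() {
  return writeIoRate.lastStat().stddev();
}
{code}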



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14612) SlowDiskReport won't update when SlowDisks is always empty in heartbeat

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14612:
--

Assignee: HaiBin Huang

> SlowDiskReport won't update when SlowDisks is always empty in heartbeat
> ---
>
> Key: HDFS-14612
> URL: https://issues.apache.org/jira/browse/HDFS-14612
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: HaiBin Huang
>Assignee: HaiBin Huang
>Priority: Major
> Attachments: HDFS-14612-001.patch, HDFS-14612.patch
>
>
> I found that SlowDiskReport won't update when slowDisks is always empty in 
> org.apache.hadoop.hdfs.server.blockmanagement.*handleHeartbeat*; this may 
> leave an outdated SlowDiskReport in the NameNode's JMX until the next time 
> slowDisks isn't empty. So I think the method 
> *checkAndUpdateReportIfNecessary()* should be called first when we want to 
> get the JMX information about the SlowDiskReport; this keeps the 
> SlowDiskReport exposed via JMX always valid.
>  
> There are also some incorrect object references in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.
> *DataNodeVolumeMetrics*
> {code:java}
> // Based on writeIoRate
> public long getWriteIoSampleCount() {
>   return syncIoRate.lastStat().numSamples();
> }
> public double getWriteIoMean() {
>   return syncIoRate.lastStat().mean();
> }
> public double getWriteIoStdDev() {
>   return syncIoRate.lastStat().stddev();
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14634) the original active namenode should have priority to participate in the election when the zookeeper recovery

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14634:
--

Assignee: liying

> the original active namenode should  have priority to participate in the 
> election when the zookeeper recovery
> -
>
> Key: HDFS-14634
> URL: https://issues.apache.org/jira/browse/HDFS-14634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 2.7.2
>Reporter: liying
>Assignee: liying
>Priority: Major
> Fix For: 2.7.2
>
> Attachments: HDFS-14634.001.patch
>
>
> Dynamically generate the namenode's election priority in the ZKFC module. 
> For example, when ZooKeeper crashes, all of the namenodes remain in their 
> original state. Then, when the ZooKeeper service recovers, the original active 
> namenode should have priority to participate in the election.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-13901:
--

Assignee: Jinglun

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch
>
>
> That's because in getBlockLocations there is a gap between the read unlock and 
> re-acquiring the write lock (to update the access time). If a rename operation 
> occurs in the gap, the access-time update is ignored. We can calculate the new 
> path from the inode and use the new path to update the access time.
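
A rough sketch of that idea (the helper around the lock is hypothetical): after re-taking the write lock, resolve the inode's current path instead of reusing the path captured before the unlock.

{code:java}
// Hypothetical sketch only -- not the actual getBlockLocations code.
writeLock();
try {
  // The file may have been renamed while the read lock was released, so
  // recompute the path from the inode instead of reusing the original src.
  String currentPath = inode.getFullPathName();
  updateAccessTime(currentPath, inode, now);   // hypothetical helper
} finally {
  writeUnlock();
}
{code}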



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13657) INodeId's LAST_RESERVED_ID may not as expected and the comment is misleading

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13657:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> INodeId's LAST_RESERVED_ID may not as expected and the comment is misleading 
> -
>
> Key: HDFS-13657
> URL: https://issues.apache.org/jira/browse/HDFS-13657
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.7
>Reporter: Wang XL
>Assignee: Wang XL
>Priority: Trivial
> Attachments: HDFS-13657-trunk.001.patch
>
>
> The comment of class INodeId is misleading. According to the comment, IDs 1 to 1000 
> are reserved for potential future usage, but the code {{public static final long 
> LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384. The reason is 
> that operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} is not 
> equal to {{(2 << 14) - 1}}.
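
The precedence claim is easy to verify with a self-contained check (illustration only, not from the patch):

{code:java}
public class ShiftPrecedence {
  public static void main(String[] args) {
    // '-' binds tighter than '<<', so the subtraction happens first.
    System.out.println(2 << 14 - 1);    // 2 << 13   = 16384
    System.out.println((2 << 14) - 1);  // 32768 - 1 = 32767
  }
}
{code}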



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14683) WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response

2019-07-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14683:
--
Description: 
Quote [~jojochuang]'s 
[comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
{quote}
ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
but webhdfs GETCONTENTSUMMARY doesn't include that.
{quote}

Current response:

{code:json}
GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1

{
  "ContentSummary": {
"directoryCount": 15,
"fileCount": 1,
"length": 180838,
"quota": -1,
"spaceConsumed": 542514,
"spaceQuota": -1,
"typeQuota": {}
  }
}
{code}

  was:
Quote [~jojochuang]'s 
[comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
{quote}
ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
but webhdfs GETCONTENTSUMMARY doesn't include that.
{quote}

{code:json}
GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1

{
  "ContentSummary": {
"directoryCount": 15,
"fileCount": 1,
"length": 180838,
"quota": -1,
"spaceConsumed": 542514,
"spaceQuota": -1,
"typeQuota": {}
  }
}
{code}


> WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response
> 
>
> Key: HDFS-14683
> URL: https://issues.apache.org/jira/browse/HDFS-14683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Quote [~jojochuang]'s 
> [comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
> {quote}
> ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
> but webhdfs GETCONTENTSUMMARY doesn't include that.
> {quote}
> Current response:
> {code:json}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-07-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896699#comment-16896699
 ] 

Siyao Meng commented on HDFS-14034:
---

[~jojochuang] Filed HDFS-14683 to add erasureCodingPolicy field to 
GETCONTENTSUMMARY response.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14683) WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY result

2019-07-30 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-14683:
-

 Summary: WebHDFS: Add erasureCodingPolicy field to 
GETCONTENTSUMMARY result
 Key: HDFS-14683
 URL: https://issues.apache.org/jira/browse/HDFS-14683
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng


Quote [~jojochuang]'s 
[comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
{quote}
ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
but webhdfs GETCONTENTSUMMARY doesn't include that.
{quote}

{code:json}
GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1

{
  "ContentSummary": {
"directoryCount": 15,
"fileCount": 1,
"length": 180838,
"quota": -1,
"spaceConsumed": 542514,
"spaceQuota": -1,
"typeQuota": {}
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14683) WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response

2019-07-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14683:
--
Summary: WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY 
response  (was: WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY 
result)

> WebHDFS: Add erasureCodingPolicy field to GETCONTENTSUMMARY response
> 
>
> Key: HDFS-14683
> URL: https://issues.apache.org/jira/browse/HDFS-14683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Quote [~jojochuang]'s 
> [comment|https://issues.apache.org/jira/browse/HDFS-14034?focusedCommentId=16880062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16880062]:
> {quote}
> ContentSummary has a field erasureCodingPolicy which was added in HDFS-11647, 
> but webhdfs GETCONTENTSUMMARY doesn't include that.
> {quote}
> {code:json}
> GET /webhdfs/v1/tmp/?op=GETCONTENTSUMMARY HTTP/1.1
> {
>   "ContentSummary": {
> "directoryCount": 15,
> "fileCount": 1,
> "length": 180838,
> "quota": -1,
> "spaceConsumed": 542514,
> "spaceQuota": -1,
> "typeQuota": {}
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13657) INodeId's LAST_RESERVED_ID may not as expected and the comment is misleading

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-13657:
--

Assignee: Wang XL

> INodeId's LAST_RESERVED_ID may not as expected and the comment is misleading 
> -
>
> Key: HDFS-13657
> URL: https://issues.apache.org/jira/browse/HDFS-13657
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.7
>Reporter: Wang XL
>Assignee: Wang XL
>Priority: Trivial
> Attachments: HDFS-13657-trunk.001.patch
>
>
> The comment of class INodeId is misleading. According to the comment, IDs 1 to 1000 
> are reserved for potential future usage, but the code {{public static final long 
> LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384. The reason is 
> that operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} is not 
> equal to {{(2 << 14) - 1}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-07-30 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896696#comment-16896696
 ] 

Lisheng Sun commented on HDFS-14313:


Hi [~linyiqun], could you find time to continue the review? Thank you.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch
>
>
> The two existing ways of getting the used space, DU and DF, are both insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting the HDFS used space from the FsDatasetImpl#volumeMap#ReplicaInfos in 
> memory is very cheap and accurate.
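
A minimal sketch of that in-memory accounting (the accessors on the replica map are assumptions, not the actual patch):

{code:java}
// Hypothetical sketch only -- sums the block bytes tracked in the in-memory
// replica map instead of shelling out to du/df.
long usedBytes = 0;
for (String bpid : volumeMap.getBlockPoolList()) {         // assumed accessor
  for (ReplicaInfo replica : volumeMap.replicas(bpid)) {    // assumed accessor
    usedBytes += replica.getNumBytes();                     // data bytes per replica
  }
}
{code}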



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285446=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285446
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 01:55
Start Date: 31/Jul/19 01:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011529
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRequest.java
 ##
 @@ -82,6 +83,12 @@
   protected long scmBlockSize = 1000L;
   protected long dataSize;
 
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   fixed it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285446)
Time Spent: 3h 50m  (was: 3h 40m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In this Jira the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use the Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be used in 
> the non-HA path to wait for the flush, and when it completes the response is 
> returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. This 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in 
> the order the requests are received.
>  
> In this Jira we shall not convert the non-HA code path to use this, as the security 
> and ACL work needed to use this new model is not yet completed.
>  
>  
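
A compact, self-contained sketch of that flow (all names are illustrative, not the actual OM classes): validateAndUpdateCache adds the response to the double buffer and gets back a future; the non-HA handler waits on that future before replying to the client.

{code:java}
// Illustrative sketch only -- names are made up, not the actual OzoneManager code.
import java.util.concurrent.CompletableFuture;

class DoubleBufferSketch {

  /** Called from validateAndUpdateCache(): enqueue the response and hand back
   *  a future that the flush thread completes once the batch hits the DB. */
  CompletableFuture<String> addToDoubleBuffer(String response) {
    CompletableFuture<String> flushed = new CompletableFuture<>();
    // In the real code a background flush thread would complete the future
    // after the batched DB write; completing inline keeps the sketch runnable.
    flushed.complete(response);
    return flushed;
  }

  /** Non-HA handler path: wait for the flush, then reply to the client. */
  String handleRequest(String request) {
    return addToDoubleBuffer("OMResponse for " + request).join();
  }
}
{code}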



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285447
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 01:55
Start Date: 31/Jul/19 01:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeRequest.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for Volume request.
+ */
+@SuppressWarnings("visibilitymodifier")
+public class TestOMVolumeRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   Fixed it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285447)
Time Spent: 4h  (was: 3h 50m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In this Jira the following things will be implemented:
>  # Make the necessary changes for the non-HA code path to use the Cache and 
> DoubleBuffer.
>  ## When adding to the double buffer, return a future. This future will be used in 
> the non-HA path to wait for the flush, and when it completes the response is 
> returned to the client.
>  ## Adding to the double buffer will happen inside validateAndUpdateCache. This 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, entries are inserted into the double buffer in 
> the order the requests are received.
>  
> In this Jira we shall not convert the non-HA code path to use this, as the security 
> and ACL work needed to use this new model is not yet completed.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285444=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285444
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 01:55
Start Date: 31/Jul/19 01:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011500
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/bucket/TestS3BucketRequest.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for S3 Bucket request.
+ */
+@SuppressWarnings("visibilityModifier")
+public class TestS3BucketRequest {
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   Yes, thanks for catching it. Fixed it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285444)
Time Spent: 3h 40m  (was: 3.5h)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In this Jira following things will be implemented:
>  # Make the necessary changes for non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to double buffer, return future. This future will be used in 
> the non-HA path to wait for this, and when it is completed return response to 
> the client.
>  ## Add to double-buffer will happen inside validateAndUpdateCache. In this 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, the order inserted in to double buffer will happen in 
> the order requests are received.
>  
> In this Jira, we shall not convert non-ha code path to use this, as security 
> and acl work is not completed to use this new model.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14569) Result of crypto -listZones is not formatted properly

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896674#comment-16896674
 ] 

Hudson commented on HDFS-14569:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17011 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17011/])
HDFS-14569. Result of crypto -listZones is not formatted properly. (weichiu: 
rev 0f2dad6679b7fc35474a3d33dc40b0db89bb1d80)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/TableListing.java


> Result of crypto -listZones is not formatted properly
> -
>
> Key: HDFS-14569
> URL: https://issues.apache.org/jira/browse/HDFS-14569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14569.patch, image-2019-06-14-17-39-42-244.png
>
>
> hdfs crypto -listZones displays zones and keys.
> If the zone length + key length is greater than 80 characters, 
> the key will display only 4 characters in a row, which is too small. 
> !image-2019-06-14-17-39-42-244.png!
> The result is not formatted properly.
> Increase the column size for the key.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14419) Avoid repeated calls to the listOpenFiles function

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896673#comment-16896673
 ] 

Hudson commented on HDFS-14419:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17011 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17011/])
HDFS-14419. Avoid repeated calls to the listOpenFiles function. (weichiu: rev 
99f88c30cb5771579dd2b627673f05287b4fbe19)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java


> Avoid repeated calls to the listOpenFiles function
> --
>
> Key: HDFS-14419
> URL: https://issues.apache.org/jira/browse/HDFS-14419
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14419.001.patch, HDFS-14419.002.patch
>
>
> `hdfs dfsadmin -listOpenFiles -path /any/path` will request all opened files. 
> In the NameNode side, the function 
> LeaseManager.java#getUnderConstructionFiles will be called.
> When there are only N (< maxListOpenFilesResponses) files matching the conditions, but the 
> leaseManager contains M (> maxListOpenFilesResponses) files, we will scan all leases. Finally, 
> the hasMore will be set true and the openFileEntries contains N files, so listOpenFiles() will be 
> called again.
> If M is much greater than N, the two calls to getUnderConstructionFiles will 
> impact the NameNode performance.
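
To make the described behaviour concrete, here is a minimal, self-contained sketch (illustrative names only, not the actual LeaseManager or dfsadmin code) of how deriving hasMore from "returned entries < total open files" instead of "more matching files remain" forces a second, useless scan:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListOpenFilesPagingSketch {
  static final int MAX_RESPONSES = 3; // stands in for maxListOpenFilesResponses

  static class Page {
    final List<String> entries;
    final boolean hasMore;
    Page(List<String> entries, boolean hasMore) {
      this.entries = entries;
      this.hasMore = hasMore;
    }
  }

  /** Scans all M open files and keeps the N of them under filterPath. */
  static Page listOpenFiles(List<String> allOpenFiles, String filterPath) {
    List<String> matched = new ArrayList<>();
    for (String path : allOpenFiles) {
      if (path.startsWith(filterPath) && matched.size() < MAX_RESPONSES) {
        matched.add(path);
      }
    }
    // hasMore compares N against M, so it is true even though every lease has
    // already been scanned and nothing more matches the path.
    boolean hasMore = matched.size() < allOpenFiles.size();
    return new Page(matched, hasMore);
  }

  public static void main(String[] args) {
    List<String> open = Arrays.asList("/any/path/a", "/other/b", "/other/c");
    Page page = listOpenFiles(open, "/any/path");
    System.out.println(page.entries + " hasMore=" + page.hasMore); // [/any/path/a] hasMore=true
  }
}
{code}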



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14661) RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not exist

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14661:
--

Assignee: xuzq

> RBF: updateMountTableEntry shouldn't update mountTableEntry if targetPath not 
> exist
> ---
>
> Key: HDFS-14661
> URL: https://issues.apache.org/jira/browse/HDFS-14661
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.2
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14661-HDFS-13891-001.patch, 
> HDFS-14661-trunk-001.patch, HDFS-14661-trunk-002.patch
>
>
> The updateMountTableEntry shouldn't update the mountEntry if the targetPath 
> does not exist.
> {code:java}
> @Override
> public UpdateMountTableEntryResponse updateMountTableEntry(
> UpdateMountTableEntryRequest request) throws IOException {
>   UpdateMountTableEntryResponse response =
>   getMountTableStore().updateMountTableEntry(request);
>   MountTable mountTable = request.getEntry();
>   if (mountTable != null && router.isQuotaEnabled()) {
> synchronizeQuota(mountTable.getSourcePath(),
> mountTable.getQuota().getQuota(),
> mountTable.getQuota().getSpaceQuota());
>   }
>   return response;
> }
> /**
>  * Synchronize the quota value across mount table and subclusters.
>  * @param path Source path in given mount table.
>  * @param nsQuota Name quota definition in given mount table.
>  * @param ssQuota Space quota definition in given mount table.
>  * @throws IOException
>  */
> private void synchronizeQuota(String path, long nsQuota, long ssQuota)
> throws IOException {
>   if (router.isQuotaEnabled() &&
>   (nsQuota != HdfsConstants.QUOTA_DONT_SET
>   || ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
> HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
> if (ret != null) {
>   this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
>   ssQuota, null);
> }
>   }
> }
> {code}
> As above, updateMountTableEntry updates one mountEntry in two steps:
>  # update the mountEntry in ZooKeeper
>  # synchronizeQuota (may throw an exception like "Directory does not 
> exist")
>  
> If synchronizeQuota throws an exception, that exception is returned to 
> dfsRouterAdmin, but the new mountEntry has already been updated in ZooKeeper.  
> That's clearly not what we would expect.
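
A minimal sketch of the ordering the reporter is asking for, using hypothetical interfaces that stand in for the Router internals (this is not the real RouterAdminServer code): verify the target path before persisting the entry, so a failed check cannot leave a half-applied update in ZooKeeper.

{code:java}
public class UpdateMountTableOrderingSketch {
  interface MountStore { void update(String src, String target); }
  interface PathChecker { boolean exists(String target); }

  static boolean updateMountTableEntry(MountStore store, PathChecker checker,
                                       String src, String target) {
    // Check the target (and anything synchronizeQuota needs) *before* writing,
    // so an exception here cannot leave a new entry already persisted in ZK.
    if (!checker.exists(target)) {
      return false; // reject the update instead of persisting it first
    }
    store.update(src, target);
    return true;
  }
}
{code}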



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14147) Backport of HDFS-13056 to the 2.9 branch: "Expose file-level composite CRCs in HDFS"

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14147:
--

Assignee: Yan

> Backport of HDFS-13056 to the 2.9 branch: "Expose file-level composite CRCs 
> in HDFS"
> 
>
> Key: HDFS-14147
> URL: https://issues.apache.org/jira/browse/HDFS-14147
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, hdfs
>Affects Versions: 2.9.0, 2.9.1, 2.9.2
>Reporter: Yan
>Assignee: Yan
>Priority: Major
> Attachments: HDFS-14147-branch-2.9-001.patch, 
> HDFS-14147-branch-2.9-001.patch, HDFS-14147.pdf
>
>
> HDFS-13056, Expose file-level composite CRCs in HDFS which are comparable 
> across different instances/layouts, is a significant feature for storage 
> agnostic CRC comparisons between HDFS and cloud object stores such as S3 and 
> GCS. With the extensively installed base of Hadoop 2, it should make a lot of 
> sense to have the feature in Hadoop 2.
> The plan is to start with the backporting to 2.9, followed by 2.8 and 2.7 in 
> that order.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14669) TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails intermittently in trunk

2019-07-30 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896671#comment-16896671
 ] 

qiang Liu commented on HDFS-14669:
--

[~ayushtkn] could you please review the latest patch 
[^HDFS-14669-trunk.003.patch]

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails 
> intermittently in trunk
> -
>
> Key: HDFS-14669
> URL: https://issues.apache.org/jira/browse/HDFS-14669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.2.0
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: scanner, test
> Attachments: HDFS-14669-trunk-001.patch, HDFS-14669-trunk.002.patch, 
> HDFS-14669-trunk.003.patch
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testDirectoryScannerInFederatedCluster
>  randomly fails because it writes files with the same name, meaning the intent is to 
> write 2 files but the 2 files get the same name, which causes a race condition between the 
> datanode deleting the block and the scan counting the block.
>  
> Ref :: 
> [https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1207/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testDirectoryScannerInFederatedCluster/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1874) Describe how ozoneManagerDoubleBuffer works in ascii art in code

2019-07-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1874:


 Summary: Describe how ozoneManagerDoubleBuffer works in ascii art 
in code
 Key: HDDS-1874
 URL: https://issues.apache.org/jira/browse/HDDS-1874
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This Jira is created based on [~arp] comment on HDDS-1856.

Also we should probably add for the existing fields. I think some ASCII art 
description of how double buffer works will be helpful to future maintainers. 
However it's okay to file a follow up jira and do separately later. Don't need 
to do it for this commit.
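
As a starting point, a rough sketch of the kind of diagram being asked for, assuming the usual two-buffer swap; the real comment would live next to the fields in OzoneManagerDoubleBuffer.java:

{code:java}
/*
 *   handler threads                        flush thread
 *        |  add(response, txIndex)              |
 *        v                                      v
 *   +----------------+    swap when      +----------------+
 *   | currentBuffer  | ----------------> |  readyBuffer   | --> batch write to DB
 *   +----------------+   flush starts    +----------------+
 *
 * New requests keep appending to currentBuffer while readyBuffer is being
 * flushed; after the flush completes the two buffers swap roles again.
 */
{code}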



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1874) Describe how ozoneManagerDoubleBuffer works in ascii art in code

2019-07-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1874:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-505

> Describe how ozoneManagerDoubleBuffer works in ascii art in code
> 
>
> Key: HDDS-1874
> URL: https://issues.apache.org/jira/browse/HDDS-1874
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Minor
>
> This Jira is created based on [~arp] comment on HDDS-1856.
> Also we should probably add for the existing fields. I think some ASCII art 
> description of how double buffer works will be helpful to future maintainers. 
> However it's okay to file a follow up jira and do separately later. Don't 
> need to do it for this commit.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285438&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285438
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 01:35
Start Date: 31/Jul/19 01:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309008096
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -61,6 +63,10 @@
   private Queue> currentBuffer;
   private Queue> readyBuffer;
 
+
+  private Queue> currentFutureQueue;
 
 Review comment:
   Opened jira for this.
   https://issues.apache.org/jira/browse/HDDS-1874
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285438)
Time Spent: 3.5h  (was: 3h 20m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In this Jira following things will be implemented:
>  # Make the necessary changes for non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to double buffer, return future. This future will be used in 
> the non-HA path to wait for this, and when it is completed return response to 
> the client.
>  ## Add to double-buffer will happen inside validateAndUpdateCache. In this 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, the order inserted in to double buffer will happen in 
> the order requests are received.
>  
> In this Jira, we shall not convert non-ha code path to use this, as security 
> and acl work is not completed to use this new model.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1874) Describe how ozoneManagerDoubleBuffer works in ascii art in code

2019-07-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1874:
-
Priority: Minor  (was: Major)

> Describe how ozoneManagerDoubleBuffer works in ascii art in code
> 
>
> Key: HDDS-1874
> URL: https://issues.apache.org/jira/browse/HDDS-1874
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Minor
>
> This Jira is created based on [~arp] comment on HDDS-1856.
> Also we should probably add for the existing fields. I think some ASCII art 
> description of how double buffer works will be helpful to future maintainers. 
> However it's okay to file a follow up jira and do separately later. Don't 
> need to do it for this commit.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14679) failed to add erasure code policies with example template

2019-07-30 Thread Yuan Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1689#comment-1689
 ] 

Yuan Zhou commented on HDFS-14679:
--

Hi [~ayushtkn], just updated the fix with a small change on the template (upper 
case -> lower case). 

Thanks, -yuan

> failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature and ran into an 
> issue on adding policies. The example on adding erasure code policies with 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be due to the mismatching codec case (upper case vs lower case). 
> The codec is in upper case in the example template[1] while all available 
> codecs are lower case[2]. A way to fix it may be just converting the codec to 
> lower case when parsing the policy schema. Also attached a simple patch here. 
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan
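
A hedged sketch of the proposed normalization (the variable names are illustrative, and this is not the actual ECPolicyLoader code): lower-case the codec read from the XML before it is validated, so that "RS-LEGACY" from the template matches the lower-case codec constants.

{code:java}
// Value parsed from the <codec> element of the policy file.
String codecFromXml = "RS-LEGACY";
// Normalize before validating against the lower-case codec constants.
String codec = codecFromXml.trim().toLowerCase();
System.out.println("codec used for lookup: " + codec); // rs-legacy
{code}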



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14679) failed to add erasure code policies with example template

2019-07-30 Thread Yuan Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Zhou updated HDFS-14679:
-
Description: 
Hi Hadoop developers,

 

Trying to do some quick tests with the erasure coding feature and ran into an issue 
on adding policies. The example on adding erasure code policies with the provided 
template failed:
{quote}./bin/hdfs ec -addPolicies -policyFile /tmp/user_ec_policies.xml.template
 2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
/tmp/user_ec_policies.xml.template
 Add ErasureCodingPolicy XOR-2-1-128k succeed.
 Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is Codec 
name RS-legacy is not supported
{quote}
The issue seems to be due to the mismatching codec case (upper case vs lower case). The 
codec is in upper case in the example template[1] while all available codecs 
are lower case[2]. A way to fix it may be just converting the codec to lower case 
when parsing the policy schema. Also attached a simple patch here. 

[1] 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]

[2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]

Thanks, -yuan

  was:
Hi Hadoop developers,

 

Trying to do some quick tests with erasure coding feature and ran into a issue 
on adding policies. The example on adding erasure code policies with provided 
template failed:
{quote}./bin/hdfs ec -addPolicies -policyFile /tmp/user_ec_policies.xml.template
 2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
/tmp/user_ec_policies.xml.template
 Add ErasureCodingPolicy XOR-2-1-128k succeed.
 Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is Codec 
name RS-legacy is not supported
{quote}
The issue seems due to be the mismatching codec(upper case vs lower case). The 
codec is in upper case in the example template[1] while all available codecs 
are lower case[2]. A way to fix maybe just converting the codec to lower case 
when parsing the policy schema. Also attached a simple patch here. 

[1] 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]

[2][https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]

Thanks, -yuan


> failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature and ran into an 
> issue on adding policies. The example on adding erasure code policies with 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be due to the mismatching codec case (upper case vs lower case). 
> The codec is in upper case in the example template[1] while all available 
> codecs are lower case[2]. A way to fix it may be just converting the codec to 
> lower case when parsing the policy schema. Also attached a simple patch here. 
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14679) failed to add erasure code policies with example template

2019-07-30 Thread Yuan Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Zhou updated HDFS-14679:
-
Attachment: HDFS-14679-02.patch

> failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature and ran into an 
> issue on adding policies. The example on adding erasure code policies with 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be due to the mismatching codec case (upper case vs lower case). 
> The codec is in upper case in the example template[1] while all available 
> codecs are lower case[2]. A way to fix it may be just converting the codec to 
> lower case when parsing the policy schema. Also attached a simple patch here. 
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> Thanks, -yuan



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285378&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285378
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 00:37
Start Date: 31/Jul/19 00:37
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308998671
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture add(OMClientResponse response,
 
 Review comment:
   No that's fine. Leave it as it is.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285378)
Time Spent: 3h 20m  (was: 3h 10m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In this Jira following things will be implemented:
>  # Make the necessary changes for non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to double buffer, return future. This future will be used in 
> the non-HA path to wait for this, and when it is completed return response to 
> the client.
>  ## Add to double-buffer will happen inside validateAndUpdateCache. In this 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, the order inserted in to double buffer will happen in 
> the order requests are received.
>  
> In this Jira, we shall not convert non-ha code path to use this, as security 
> and acl work is not completed to use this new model.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1619) Support volume addACL operations for OM HA.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=285375&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285375
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 31/Jul/19 00:35
Start Date: 31/Jul/19 00:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1147: HDDS-1619. 
Support volume addACL operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-516646998
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 799 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 353 | the patch passed |
   | +1 | javac | 353 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 646 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 677 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 286 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2603 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8165 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ae40e9d71f97 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7849bdc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/testReport/ |
   | Max. process+thread count | 3852 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285375)
Time Spent: 4h 20m  (was: 4h 10m)

> Support volume addACL operations for OM HA.
> ---
>
> Key: HDDS-1619
> URL: https://issues.apache.org/jira/browse/HDDS-1619
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu 

[jira] [Commented] (HDFS-14557) JournalNode error: Can't scan a pre-transactional edit log

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896649#comment-16896649
 ] 

Wei-Chiu Chuang commented on HDFS-14557:


Thanks [~sodonnell], really brilliant analysis. I skimmed through the patch and it 
looks good. I'll take a more careful look tomorrow.

> JournalNode error: Can't scan a pre-transactional edit log
> --
>
> Key: HDFS-14557
> URL: https://issues.apache.org/jira/browse/HDFS-14557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14557.001.patch
>
>
> We saw the following error in JournalNodes a few times before.
> {noformat}
> 2016-09-22 12:44:24,505 WARN org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Caught exception after scanning through 0 ops from /data/1/dfs/current/ed
> its_inprogress_0661942 while determining its valid length. 
> Position was 761856
> java.io.IOException: Can't scan a pre-transactional edit log.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4592)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
> at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:193)
> at org.apache.hadoop.hdfs.qjournal.server.Journal.(Journal.java:153)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
> {noformat}
> The edit file was corrupt, and one possible culprit of this error is a full 
> disk. The JournalNode can't recover and must be resynced manually from other 
> JournalNodes. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12914) Block report leases cause missing blocks until next report

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12914:
---
Attachment: HDFS-12914.branch-2.000.patch

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.patch, HDFS-12914.branch-3.0.patch, 
> HDFS-12914.branch-3.1.001.patch, HDFS-12914.branch-3.1.002.patch, 
> HDFS-12914.branch-3.2.patch, HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false which bubbles up to  {{NameNodeRpcServer#blockReport}} and 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected from an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs 
> next FBR is sent and/or forced.
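
A simplified, self-contained illustration of the failure mode (hypothetical names; not the real BlockReportLeaseManager or NameNodeRpcServer code): because lease rejection is a plain boolean rather than an exception, the caller cannot tell a processed report from a dropped one, and the re-registered DN is never asked to resend.

{code:java}
public class BlockReportLeaseSketch {
  /** Rejection ("unknown datanode", expired lease, wrong id, ...) just returns false. */
  static boolean checkLease(long leaseId, long expectedLeaseId) {
    return leaseId != 0 && leaseId == expectedLeaseId;
  }

  static void blockReport(long leaseId, long expectedLeaseId) {
    if (!checkLease(leaseId, expectedLeaseId)) {
      // No exception is thrown and no retry is requested: the report is silently
      // dropped, and the node stays active with no blocks until its next FBR.
      return;
    }
    // ... process the full block report ...
  }
}
{code}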



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896645#comment-16896645
 ] 

Wei-Chiu Chuang commented on HDFS-12914:


Submitted a branch-2 patch for the precommit check. This is a critical bug fix, so I 
think it's worth a branch-2 version.

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.patch, HDFS-12914.branch-3.0.patch, 
> HDFS-12914.branch-3.1.001.patch, HDFS-12914.branch-3.1.002.patch, 
> HDFS-12914.branch-3.2.patch, HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false which bubbles up to  {{NameNodeRpcServer#blockReport}} and 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected from an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs 
> next FBR is sent and/or forced.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285356&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285356
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 31/Jul/19 00:06
Start Date: 31/Jul/19 00:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308989717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture add(OMClientResponse response,
 
 Review comment:
   This is for a temporary thing. Sooner, all OM uses HA, so this code will be 
removed in later point of time. And in HA case, we don't even use future. So, I 
think this should be okay, let me know you still want to use Optional here?
   
   So, in code if you see where future will be used, we don't need != null 
check.
   https://github.com/apache/hadoop/pull/1166/files in 
OzoneManagerProtocolServerSideTranslatorPB.java
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285356)
Time Spent: 3h 10m  (was: 3h)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> In this Jira following things will be implemented:
>  # Make the necessary changes for non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to double buffer, return future. This future will be used in 
> the non-HA path to wait for this, and when it is completed return response to 
> the client.
>  ## Add to double-buffer will happen inside validateAndUpdateCache. In this 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, the order inserted in to double buffer will happen in 
> the order requests are received.
>  
> In this Jira, we shall not convert non-ha code path to use this, as security 
> and acl work is not completed to use this new model.
>  
>  
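
A minimal sketch of that flow, with illustrative names (this is not the actual OzoneManagerDoubleBuffer): add() returns a future that the non-HA RPC handler blocks on, and the flush thread completes the futures once the batch has been written.

{code:java}
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DoubleBufferFutureSketch {
  private final Queue<CompletableFuture<Void>> pendingFutures =
      new ConcurrentLinkedQueue<>();

  /** Called from inside validateAndUpdateCache, so insertion order == request order. */
  public synchronized CompletableFuture<Void> add(Object response, long txIndex) {
    // ... append (response, txIndex) to the current buffer here ...
    CompletableFuture<Void> future = new CompletableFuture<>();
    pendingFutures.add(future);
    return future; // the non-HA handler waits on this before replying to the client
  }

  /** Called by the flush thread after the ready buffer has been written out. */
  void completeFlushedFutures() {
    CompletableFuture<Void> f;
    while ((f = pendingFutures.poll()) != null) {
      f.complete(null);
    }
  }
}
{code}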



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896635#comment-16896635
 ] 

Wei-Chiu Chuang edited comment on HDFS-14476 at 7/31/19 12:00 AM:
--

Retract my +1. The patch no longer applies and needs to be rebased for trunk.
[~seanlook] could you rebase the patch and click "Submit Patch"?


was (Author: jojochuang):
Retract my +1. The patch no longer applies and need to rebase for trunk.

> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HDFS-14476.00.patch, datanode-with-patch-14476.png
>
>
> When the directoryScanner has the results of the differences between disk and 
> in-memory blocks, it will try to run {{checkAndUpdate}} to fix them. However, 
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> As I have about 6 million blocks on every datanode and every 6-hour scan 
> finds about 25000 abnormal blocks to fix, that leads to a long lock being 
> held on the FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of the latency of SAS disks); 
> that will cost 250 seconds to finish. That means all reads and writes will be 
> blocked for 3 minutes on that datanode.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are blocked, and 
> the namenode will see a long lastContact time for this datanode.
> This may affect all HDFS versions.
> *how to fix:*
> Just like invalidate commands from the namenode are processed with a batch size of 1000, fixing 
> these abnormal blocks should be handled in batches too, sleeping 2 seconds 
> between batches to allow normal reading/writing of blocks.
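
A short sketch of that batching idea, with hypothetical names (not the actual FsDatasetImpl code): hold the dataset lock only for one batch at a time and sleep between batches.

{code:java}
import java.util.List;

public class BatchedCheckAndUpdateSketch {
  private static final int BATCH_SIZE = 1000;
  private static final long SLEEP_BETWEEN_BATCHES_MS = 2000;
  private final Object datasetLock = new Object();

  void fixDifferences(List<Runnable> diffFixes) throws InterruptedException {
    for (int from = 0; from < diffFixes.size(); from += BATCH_SIZE) {
      int to = Math.min(from + BATCH_SIZE, diffFixes.size());
      synchronized (datasetLock) {            // hold the lock for one batch only
        for (Runnable fix : diffFixes.subList(from, to)) {
          fix.run();                          // one checkAndUpdate-style repair
        }
      }
      Thread.sleep(SLEEP_BETWEEN_BATCHES_MS); // let normal reads/writes proceed
    }
  }
}
{code}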



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896635#comment-16896635
 ] 

Wei-Chiu Chuang commented on HDFS-14476:


Retract my +1. The patch no longer applies and need to rebase for trunk.

> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HDFS-14476.00.patch, datanode-with-patch-14476.png
>
>
> When the directoryScanner has the results of the differences between disk and 
> in-memory blocks, it will try to run {{checkAndUpdate}} to fix them. However, 
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> As I have about 6 million blocks on every datanode and every 6-hour scan 
> finds about 25000 abnormal blocks to fix, that leads to a long lock being 
> held on the FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of the latency of SAS disks); 
> that will cost 250 seconds to finish. That means all reads and writes will be 
> blocked for 3 minutes on that datanode.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are blocked, and 
> the namenode will see a long lastContact time for this datanode.
> This may affect all HDFS versions.
> *how to fix:*
> Just like invalidate commands from the namenode are processed with a batch size of 1000, fixing 
> these abnormal blocks should be handled in batches too, sleeping 2 seconds 
> between batches to allow normal reading/writing of blocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14569) Result of crypto -listZones is not formatted properly

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14569:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks!

> Result of crypto -listZones is not formatted properly
> -
>
> Key: HDFS-14569
> URL: https://issues.apache.org/jira/browse/HDFS-14569
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14569.patch, image-2019-06-14-17-39-42-244.png
>
>
> hdfs crypto -listZones displays zones and keys.
> If the zone length + key length is greater than 80 characters, 
> the key will display only 4 characters in a row, which is too small. 
> !image-2019-06-14-17-39-42-244.png!
> The result is not formatted properly.
> Increase the column size for the key.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14419) Avoid repeated calls to the listOpenFiles function

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896632#comment-16896632
 ] 

Wei-Chiu Chuang commented on HDFS-14419:


+1 I'll commit the patch. I think it's counter-productive to block a commit 
because there's no test. 

> Avoid repeated calls to the listOpenFiles function
> --
>
> Key: HDFS-14419
> URL: https://issues.apache.org/jira/browse/HDFS-14419
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14419.001.patch, HDFS-14419.002.patch
>
>
> `hdfs dfsadmin -listOpenFiles -path /any/path` will request all opened files. 
> In the NameNode side, the function 
> LeaseManager.java#getUnderConstructionFiles will be called.
> When there are only N (< maxListOpenFilesResponses) files matching the conditions, but the 
> leaseManager contains M (> maxListOpenFilesResponses) files, we will scan all leases. Finally, 
> the hasMore will be set true and the openFileEntries contains N files, so listOpenFiles() will be 
> called again.
> If M is much greater than N, the two calls to getUnderConstructionFiles will 
> impact the NameNode performance.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1856) Make changes required for Non-HA to use new HA code in OM.

2019-07-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1856?focusedWorklogId=285349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-285349
 ]

ASF GitHub Bot logged work on HDDS-1856:


Author: ASF GitHub Bot
Created on: 30/Jul/19 23:50
Start Date: 30/Jul/19 23:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1174: 
HDDS-1856. Make required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308989717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture add(OMClientResponse response,
 
 Review comment:
   This is for a temporary thing. Sooner, all OM uses HA, so this code will be 
removed in later point of time. And in HA case, we don't even use future. So, I 
think this should be okay, let me know you still want to use Optional here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 285349)
Time Spent: 3h  (was: 2h 50m)

> Make changes required for Non-HA to use new HA code in OM.
> --
>
> Key: HDDS-1856
> URL: https://issues.apache.org/jira/browse/HDDS-1856
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> In this Jira following things will be implemented:
>  # Make the necessary changes for non-HA code path to use Cache and 
> DoubleBuffer.
>  ## When adding to double buffer, return future. This future will be used in 
> the non-HA path to wait for this, and when it is completed return response to 
> the client.
>  ## Add to double-buffer will happen inside validateAndUpdateCache. In this 
> way, in non-HA, when multiple RPC handler threads are calling preExecute and 
> validateAndUpdateCache, the order inserted in to double buffer will happen in 
> the order requests are received.
>  
> In this Jira, we shall not convert non-ha code path to use this, as security 
> and acl work is not completed to use this new model.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14419) Avoid repeated calls to the listOpenFiles function

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14419:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~daryn] for the comments and [~marvelrock] for the patch contribution!

> Avoid repeated calls to the listOpenFiles function
> --
>
> Key: HDFS-14419
> URL: https://issues.apache.org/jira/browse/HDFS-14419
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14419.001.patch, HDFS-14419.002.patch
>
>
> `hdfs dfsadmin -listOpenFiles -path /any/path` requests all open files under the 
> given path. On the NameNode side, the function 
> LeaseManager.java#getUnderConstructionFiles is called.
> When only N (<maxListOpenFilesResponses) files match the conditions, but the 
> leaseManager contains M (>maxListOpenFilesResponses) files, we scan all leases. 
> In the end hasMore is set to true while openFileEntries already contains all N 
> matching files, so listOpenFiles() is called a second time.
> If M is much greater than N, the two calls to getUnderConstructionFiles will 
> impact NameNode performance (a client-side sketch of the call pattern follows 
> below).
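A hedged, client-side sketch of the call pattern (assuming the public 
DistributedFileSystem#listOpenFiles iterator API): every batch the iterator fetches 
corresponds to one getUnderConstructionFiles call on the NameNode, so a first batch 
that returns N entries with hasMore=true forces a second RPC even though no further 
entries match.

{code:java}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType;

// Hedged sketch of the loop behind `hdfs dfsadmin -listOpenFiles -path ...`.
// Each batch fetched by the iterator is one LeaseManager#getUnderConstructionFiles
// call on the NameNode; with hasMore=true a second RPC is issued even if the first
// batch already contained every matching file.
class OpenFilesLister {
  static void printOpenFiles(DistributedFileSystem dfs, String path) throws IOException {
    RemoteIterator<OpenFileEntry> it =
        dfs.listOpenFiles(EnumSet.of(OpenFilesType.ALL_OPEN_FILES), path);
    while (it.hasNext()) {
      System.out.println(it.next().getFilePath());
    }
  }
}
{code}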



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1834) parent directories not found in secure setup due to ACL check

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896621#comment-16896621
 ] 

Hudson commented on HDDS-1834:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17010 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17010/])
HDDS-1834. parent directories not found in secure setup due to ACL (xyao: rev 
e68d8446c42a883b9cd8a1fa47d870a47db37ad6)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneObjInfo.java


> parent directories not found in secure setup due to ACL check
> -
>
> Key: HDDS-1834
> URL: https://issues.apache.org/jira/browse/HDDS-1834
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The ozonesecure-ozonefs acceptance test is failing because {{ozone fs -mkdir 
> -p}} only creates a key for the requested directory, not for its parents (a 
> small sketch of the expected key layout follows below).
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r 
> '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on the first operation that tries to use {{testdir/}} 
> directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}
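For reference, a hedged sketch of the directory keys {{ozone fs -mkdir -p 
o3fs://bucket1.fstest/testdir/deep}} is expected to create (names taken from the 
"previous result" above); this only illustrates the expected key layout, not the 
actual KeyManagerImpl fix.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: enumerate the directory keys that "-mkdir -p testdir/deep"
// should end up creating: "testdir/" and "testdir/deep/".
class ParentKeySketch {
  static List<String> directoryKeys(String key) {
    List<String> keys = new ArrayList<>();
    StringBuilder prefix = new StringBuilder();
    for (String part : key.split("/")) {
      prefix.append(part).append('/');
      keys.add(prefix.toString());
    }
    return keys; // for "testdir/deep" -> ["testdir/", "testdir/deep/"]
  }
}
{code}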



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896624#comment-16896624
 ] 

Hudson commented on HDFS-14034:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17010 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17010/])
HDFS-14034. Support getQuotaUsage API in WebHDFS. Contributed by Chao (weichiu: 
rev 3ae775d74029b6ae82263739f598ceb25c597dcd)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}}, which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA tracks adding support for this API to 
> WebHDFS.
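A hedged usage sketch, assuming the change surfaces through the standard 
FileSystem#getQuotaUsage client call when the underlying filesystem is WebHDFS; the 
host, port, and path below are placeholders.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;

// Hedged sketch: fetch quota usage over WebHDFS instead of the heavier
// getContentSummary. NameNode address and path are placeholder values.
public class QuotaUsageExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode.example.com:9870"), conf)) {
      QuotaUsage usage = fs.getQuotaUsage(new Path("/user/data"));
      System.out.println("files and directories: " + usage.getFileAndDirectoryCount()
          + " (namespace quota " + usage.getQuota() + ")");
      System.out.println("space consumed: " + usage.getSpaceConsumed()
          + " (space quota " + usage.getSpaceQuota() + ")");
    }
  }
}
{code}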



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14449) Expose total number of DT in JMX for Namenode

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896622#comment-16896622
 ] 

Hudson commented on HDFS-14449:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17010 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17010/])
HDFS-14449. Expose total number of DT in JMX for Namenode. Contributed 
(inigoiri: rev 7849bdcf70b8170ad50712dde52bfbd1dfccb28a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java


> Expose total number of DT in JMX for Namenode
> -
>
> Key: HDFS-14449
> URL: https://issues.apache.org/jira/browse/HDFS-14449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14449.001.patch, HDFS-14449.002.patch, 
> HDFS-14449.003.patch, HDFS-14449.004.patch, HDFS-14449.005.patch, 
> HDFS-14449.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13783) Balancer: make balancer to be a long service process for easy to monitor it.

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896623#comment-16896623
 ] 

Hudson commented on HDFS-13783:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17010 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17010/])
HDFS-13783. Add an option to the Balancer to make it run as a (xkrogen: rev 
1f26cc8705b5af12eefedda019e7ab5c261d9bfb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/BalancerParameters.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


> Balancer: make balancer to be a long service process for easy to monitor it.
> 
>
> Key: HDFS-13783
> URL: https://issues.apache.org/jira/browse/HDFS-13783
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover
>Reporter: maobaolong
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13783-001.patch, HDFS-13783-002.patch, 
> HDFS-13783.003.patch, HDFS-13783.004.patch, HDFS-13783.005.patch, 
> HDFS-13783.006.patch
>
>
> If the balancer runs as a long-lived service process, like the namenode and 
> datanode, we can expose balancer metrics that tell us the status of the balancer 
> and the number of blocks it has moved. We could also get or set the balance plan 
> through the balancer web UI. Many more things become possible once the balancer 
> is a long-running service.
> So, shall we start to plan the new Balancer? Hopefully this feature can make it 
> into the next release of Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


