[jira] [Work logged] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?focusedWorklogId=668801&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668801
 ]

ASF GitHub Bot logged work on HDFS-16281:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 05:15
Start Date: 22/Oct/21 05:15
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3574:
URL: https://github.com/apache/hadoop/pull/3574#issuecomment-949293724


   > There are some more tests, which just failed on timeout here:
   > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3574/1/testReport/
   > 
   > Can you sort them out as well?
   
   Thanks @ayushtkn for the information. I'd be happy to change them all; I'll do that later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668801)
Time Spent: 40m  (was: 0.5h)

> Fix flaky unit tests failed due to timeout
> --
>
> Key: HDFS-16281
> URL: https://issues.apache.org/jira/browse/HDFS-16281
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I found that this unit test 
> *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* failed several times due 
> to timeout. Can we change the timeout for some methods from _*3s*_ to *_30s_* 
> to be consistent with the other methods?
> {code:java}
> [ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 s <<< FAILURE! - in org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS
> [ERROR] testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)  Time elapsed: 4.132 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3000 milliseconds
>     at java.lang.Object.wait(Native Method)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59)
>     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1535)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1432)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>     at com.sun.proxy.$Proxy26.setTimes(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>     at com.sun.proxy.$Proxy27.setTimes(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
>     at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542)
>     at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
>     at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
>     at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
>     at 
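For illustration, a minimal sketch of the kind of change being proposed (a JUnit 4 per-method timeout raised from 3s to 30s; the class and method names are taken from the failure report above, and the exact set of methods touched is in PR #3574):

{code:java}
import org.junit.Test;

public class TestViewFileSystemOverloadSchemeWithHdfsScheme {

  // Sketch only: raise the per-method timeout from 3 seconds to 30 seconds,
  // consistent with the other test methods, so slow RPC round-trips against
  // the mini cluster no longer trip TestTimedOutException.
  @Test(timeout = 30000) // was: @Test(timeout = 3000)
  public void testNflyRepair() throws Exception {
    // ... unchanged test body ...
  }
}
{code}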

[jira] [Work logged] (HDFS-16280) Fix typo for ShortCircuitReplica#isStale

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16280?focusedWorklogId=668798&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668798
 ]

ASF GitHub Bot logged work on HDFS-16280:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 05:09
Start Date: 22/Oct/21 05:09
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3568:
URL: https://github.com/apache/hadoop/pull/3568#issuecomment-949291314


   Thanks @ayushtkn for your review and merge.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668798)
Time Spent: 1h  (was: 50m)

> Fix typo for ShortCircuitReplica#isStale
> 
>
> Key: HDFS-16280
> URL: https://issues.apache.org/jira/browse/HDFS-16280
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fix typo for ShortCircuitReplica#isStale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16282) Duplicate generic usage information to hdfs debug command

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16282?focusedWorklogId=668777&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668777
 ]

ASF GitHub Bot logged work on HDFS-16282:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 03:46
Start Date: 22/Oct/21 03:46
Worklog Time Spent: 10m 
  Work Description: cndaimin opened a new pull request #3576:
URL: https://github.com/apache/hadoop/pull/3576


   This patch was tested with the hdfs debug command:
   BEFORE:
   ```
   ~ $ hdfs debug
   Usage: hdfs debug <command> [arguments]
   
   These commands are for advanced users only.
   
   Incorrect usages may result in data loss. Use at your own risk.
   
   verifyMeta -meta <metadata-file> [-block <block-file>]
   
   Generic options supported are:
   -conf <configuration file>          specify an application configuration file
   -D <property=value>                 define a value for a given property
   -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
   -jt <local|resourcemanager:port>    specify a ResourceManager
   -files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
   -libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
   -archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines
   
   The general command line syntax is:
   command [genericOptions] [commandOptions]
   
   computeMeta -block <block-file> -out <output-metadata-file>
   
   Generic options supported are:
   -conf <configuration file>          specify an application configuration file
   -D <property=value>                 define a value for a given property
   -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
   -jt <local|resourcemanager:port>    specify a ResourceManager
   -files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
   -libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
   -archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines
   
   The general command line syntax is:
   command [genericOptions] [commandOptions]
   
   recoverLease -path <path> [-retries <num-retries>]
   
   Generic options supported are:
   -conf <configuration file>          specify an application configuration file
   -D <property=value>                 define a value for a given property
   -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
   -jt <local|resourcemanager:port>    specify a ResourceManager
   -files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
   -libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
   -archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines
   
   The general command line syntax is:
   command [genericOptions] [commandOptions]
   
   
   Generic options supported are:
   -conf <configuration file>          specify an application configuration file
   -D <property=value>                 define a value for a given property
   -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
   -jt <local|resourcemanager:port>    specify a ResourceManager
   -files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
   -libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
   -archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines
   
   The general command line syntax is:
   command [genericOptions] [commandOptions]
   ```
   
   AFTER:
   ```
   ~ $ hdfs debug
   Usage: hdfs debug <command> [arguments]
   
   These commands are for advanced users only.
   
   Incorrect usages may result in data loss. Use at your own risk.
   
   verifyMeta -meta <metadata-file> [-block <block-file>]
   computeMeta -block <block-file> -out <output-metadata-file>
   recoverLease -path <path> [-retries <num-retries>]
   verifyEC -file <file>
   
   Generic options supported are:
   -conf <configuration file>          specify an application configuration file
   -D <property=value>                 define a value for a given property
   -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
   -jt <local|resourcemanager:port>    specify a ResourceManager
   -files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
   -libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
   -archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines
   
   The general command line syntax is:
   command [genericOptions] [commandOptions]
   ```
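   One plausible shape for such a fix (a sketch only, not the actual patch; `DEBUG_COMMANDS` and `usageText` are assumed names) is to print each command's one-line usage in a loop and append the shared generic options exactly once at the end:
   
   ```java
   // Sketch only: emit one usage line per debug command, then the shared
   // generic options a single time, instead of once per command.
   // DEBUG_COMMANDS and usageText are assumed names for illustration.
   private void printUsage() {
     System.out.println("Usage: hdfs debug <command> [arguments]\n");
     for (DebugCommand command : DEBUG_COMMANDS) {
       System.out.println(command.usageText); // e.g. "verifyMeta -meta <metadata-file> [-block <block-file>]"
     }
     // Shared generic options, printed exactly once at the end.
     ToolRunner.printGenericCommandUsage(System.out);
   }
   ```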


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:

[jira] [Updated] (HDFS-16282) Duplicate generic usage information to hdfs debug command

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16282:
--
Labels: pull-request-available  (was: )

> Duplicate generic usage information to hdfs debug command
> -
>
> Key: HDFS-16282
> URL: https://issues.apache.org/jira/browse/HDFS-16282
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.0, 3.3.1
>Reporter: daimin
>Assignee: daimin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we type 'hdfs debug' in the console, the generic usage information is 
> repeated 4 times, and the target commands like 'verifyMeta' or 'recoverLease' 
> are hard to find.
> {quote}~ $ hdfs debug
> Usage: hdfs debug <command> [arguments]
> These commands are for advanced users only.
> Incorrect usages may result in data loss. Use at your own risk.
> verifyMeta -meta <metadata-file> [-block <block-file>]
> Generic options supported are:
> -conf <configuration file>  specify an application configuration file
> -D <property=value>  define a value for a given property
> -fs <file:///|hdfs://namenode:port>  specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
> -jt <local|resourcemanager:port>  specify a ResourceManager
> -files <file1,...>  specify a comma-separated list of files to be copied to the map reduce cluster
> -libjars <jar1,...>  specify a comma-separated list of jar files to be included in the classpath
> -archives <archive1,...>  specify a comma-separated list of archives to be unarchived on the compute machines
> The general command line syntax is:
> command [genericOptions] [commandOptions]
> computeMeta -block <block-file> -out <output-metadata-file>
> Generic options supported are:
> -conf <configuration file>  specify an application configuration file
> -D <property=value>  define a value for a given property
> -fs <file:///|hdfs://namenode:port>  specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
> -jt <local|resourcemanager:port>  specify a ResourceManager
> -files <file1,...>  specify a comma-separated list of files to be copied to the map reduce cluster
> -libjars <jar1,...>  specify a comma-separated list of jar files to be included in the classpath
> -archives <archive1,...>  specify a comma-separated list of archives to be unarchived on the compute machines
> The general command line syntax is:
> command [genericOptions] [commandOptions]
> recoverLease -path <path> [-retries <num-retries>]
> Generic options supported are:
> -conf <configuration file>  specify an application configuration file
> -D <property=value>  define a value for a given property
> -fs <file:///|hdfs://namenode:port>  specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
> -jt <local|resourcemanager:port>  specify a ResourceManager
> -files <file1,...>  specify a comma-separated list of files to be copied to the map reduce cluster
> -libjars <jar1,...>  specify a comma-separated list of jar files to be included in the classpath
> -archives <archive1,...>  specify a comma-separated list of archives to be unarchived on the compute machines
> The general command line syntax is:
> command [genericOptions] [commandOptions]
> Generic options supported are:
> -conf <configuration file>  specify an application configuration file
> -D <property=value>  define a value for a given property
> -fs <file:///|hdfs://namenode:port>  specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
> -jt <local|resourcemanager:port>  specify a ResourceManager
> -files <file1,...>  specify a comma-separated list of files to be copied to the map reduce cluster
> -libjars <jar1,...>  specify a comma-separated list of jar files to be included in the classpath
> -archives <archive1,...>  specify a comma-separated list of archives to be unarchived on the compute machines
> The general command line syntax is:
> command [genericOptions] [commandOptions]
> {quote}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16282) Duplicate generic usage information to hdfs debug command

2021-10-21 Thread daimin (Jira)
daimin created HDFS-16282:
-

 Summary: Duplicate generic usage information to hdfs debug command
 Key: HDFS-16282
 URL: https://issues.apache.org/jira/browse/HDFS-16282
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.3.1, 3.3.0
Reporter: daimin
Assignee: daimin


When we type 'hdfs debug' in the console, the generic usage information is 
repeated 4 times, and the target commands like 'verifyMeta' or 'recoverLease' 
are hard to find.
{quote}~ $ hdfs debug
Usage: hdfs debug <command> [arguments]

These commands are for advanced users only.

Incorrect usages may result in data loss. Use at your own risk.

verifyMeta -meta <metadata-file> [-block <block-file>]

Generic options supported are:
-conf <configuration file>          specify an application configuration file
-D <property=value>                 define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

computeMeta -block <block-file> -out <output-metadata-file>

Generic options supported are:
-conf <configuration file>          specify an application configuration file
-D <property=value>                 define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

recoverLease -path <path> [-retries <num-retries>]

Generic options supported are:
-conf <configuration file>          specify an application configuration file
-D <property=value>                 define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]


Generic options supported are:
-conf <configuration file>          specify an application configuration file
-D <property=value>                 define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
{quote}
 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread guo (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17432780#comment-17432780
 ] 

guo commented on HDFS-16277:


Thanks [~ayushtkn] for your kind review, glad to meet Hadoop here :)

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.
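
As a rough sketch of the decision being discussed (illustrative names only; the actual change is in PR #3559, which makes the tolerance configurable rather than hard-coding 5%):

{code:java}
// Sketch only: treat two candidate datanodes as equally loaded only when
// their storage usage differs by less than a configurable tolerance,
// instead of the hard-coded 5%. Names here are illustrative.
private int balancedSpaceTolerance = 5; // would be read via conf.getInt(...) in initialize()

private boolean isSimilarlyUsed(DatanodeInfo a, DatanodeInfo b) {
  float diff = Math.abs(a.getDfsUsedPercent() - b.getDfsUsedPercent());
  return diff < balancedSpaceTolerance;
}
{code}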



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=668768&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668768
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 03:00
Start Date: 22/Oct/21 03:00
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-949246786


   I haven't gone through the entire discussion/code, just the question of 
whether we should modify the existing field or add a new one. Technically both 
are correct and I don't see any serious issue with either (not thinking too 
deep). But I feel that, for the parsers to adapt, a new field might be a little 
bit easier than trying to figure out whether the existing field has a port or 
not. Just my thoughts; I am OK with whichever way most people tend to agree.
   Anyway, whatever we do should be optional & guarded by a config.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668768)
Time Spent: 3.5h  (was: 3h 20m)

> Add remote port information to HDFS audit log
> -
>
> Key: HDFS-16266
> URL: https://issues.apache.org/jira/browse/HDFS-16266
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a 
> user submits an abnormal computation task that triggers a sudden flood of 
> requests; the queueTime and processingTime of the NameNode then rise very 
> high, creating a large backlog of tasks.
> We usually locate and kill the specific Spark, Flink, or MapReduce tasks 
> based on metrics and audit logs. Currently, IP and UGI are recorded in the 
> audit logs, but there is no port information, so it is sometimes difficult to 
> locate the specific process. Therefore, I propose that we add the port 
> information to the audit log, so that we can easily track the upstream 
> process.
> Some projects, such as HBase and Alluxio, already include port information in 
> their audit logs. I think it is also necessary to add port information to the 
> HDFS audit logs.
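
For illustration, a minimal sketch of how a config-guarded port could be added to the audit log caller field (the config key and the getRemotePort() helper are assumptions for this sketch, not the committed change):

{code:java}
// Sketch only: append the caller's remote port to the audit log's ip field,
// guarded by a config flag so existing log parsers keep working by default.
// The key name and Server.getRemotePort() are assumptions for illustration.
boolean logPort = conf.getBoolean("dfs.namenode.audit.log.with.remote.port", false);
InetAddress addr = Server.getRemoteIp();
String caller = logPort
    ? addr.getHostAddress() + ":" + Server.getRemotePort()
    : addr.getHostAddress();
auditLog.info("allowed=true\tugi=" + ugi + "\tip=" + caller + "\tcmd=" + cmd);
{code}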



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16280) Fix typo for ShortCircuitReplica#isStale

2021-10-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16280.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix typo for ShortCircuitReplica#isStale
> 
>
> Key: HDFS-16280
> URL: https://issues.apache.org/jira/browse/HDFS-16280
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix typo for ShortCircuitReplica#isStale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16280) Fix typo for ShortCircuitReplica#isStale

2021-10-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17432778#comment-17432778
 ] 

Ayush Saxena commented on HDFS-16280:
-

Committed to trunk. Thanx [~tomscut] for the contribution!!!

> Fix typo for ShortCircuitReplica#isStale
> 
>
> Key: HDFS-16280
> URL: https://issues.apache.org/jira/browse/HDFS-16280
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix typo for ShortCircuitReplica#isStale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16280) Fix typo for ShortCircuitReplica#isStale

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16280?focusedWorklogId=668766&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668766
 ]

ASF GitHub Bot logged work on HDFS-16280:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:52
Start Date: 22/Oct/21 02:52
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3568:
URL: https://github.com/apache/hadoop/pull/3568#issuecomment-949243928


   Merged. Thanx @tomscut for the contribution and @ferhui for the review!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668766)
Time Spent: 50m  (was: 40m)

> Fix typo for ShortCircuitReplica#isStale
> 
>
> Key: HDFS-16280
> URL: https://issues.apache.org/jira/browse/HDFS-16280
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix typo for ShortCircuitReplica#isStale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16280) Fix typo for ShortCircuitReplica#isStale

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16280?focusedWorklogId=668765&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668765
 ]

ASF GitHub Bot logged work on HDFS-16280:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:51
Start Date: 22/Oct/21 02:51
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #3568:
URL: https://github.com/apache/hadoop/pull/3568


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668765)
Time Spent: 40m  (was: 0.5h)

> Fix typo for ShortCircuitReplica#isStale
> 
>
> Key: HDFS-16280
> URL: https://issues.apache.org/jira/browse/HDFS-16280
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix typo for ShortCircuitReplica#isStale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?focusedWorklogId=668764&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668764
 ]

ASF GitHub Bot logged work on HDFS-16281:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:49
Start Date: 22/Oct/21 02:49
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3574:
URL: https://github.com/apache/hadoop/pull/3574#issuecomment-949243054


   There are some more tests, which just failed on timeout here:
   https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3574/1/testReport/
   
   Can you sort them out as well?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668764)
Time Spent: 0.5h  (was: 20m)

> Fix flaky unit tests failed due to timeout
> --
>
> Key: HDFS-16281
> URL: https://issues.apache.org/jira/browse/HDFS-16281
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I found that this unit test 
> *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* failed several times due 
> to timeout. Can we change the timeout for some methods from _*3s*_ to *_30s_* 
> to be consistent with the other methods?
> {code:java}
> [ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 s <<< FAILURE! - in org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS
> [ERROR] testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)  Time elapsed: 4.132 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 3000 milliseconds
>     at java.lang.Object.wait(Native Method)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59)
>     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1535)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1432)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>     at com.sun.proxy.$Proxy26.setTimes(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>     at com.sun.proxy.$Proxy27.setTimes(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
>     at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542)
>     at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
>     at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
>     at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
>     at 

[jira] [Resolved] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16277.
-
Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17432774#comment-17432774
 ] 

Ayush Saxena commented on HDFS-16277:
-

Committed to trunk and branch-3.3.

Have added [~philipse] as HDFS Contributor and assigned the jira.

Thanx [~philipse] for the contribution and welcome to Hadoop. :) 

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668761
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:32
Start Date: 22/Oct/21 02:32
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-949237284


   Thanks again @ayushtkn @prasad-acit for your kind review. :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668761)
Time Spent: 5h 10m  (was: 5h)

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668760&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668760
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:31
Start Date: 22/Oct/21 02:31
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-949236776


   Thanx @GuoPhilipse for the contribution, I have merged the PR.
   Yep, the build is OK.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668760)
Time Spent: 5h  (was: 4h 50m)

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668759&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668759
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 02:29
Start Date: 22/Oct/21 02:29
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668759)
Time Spent: 4h 50m  (was: 4h 40m)

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-16277:
---

Assignee: guo

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Assignee: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16277) Improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-16277:

Summary: Improve decision in AvailableSpaceBlockPlacementPolicy  (was: 
improve decision in AvailableSpaceBlockPlacementPolicy)

> Improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668710
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 22/Oct/21 00:02
Start Date: 22/Oct/21 00:02
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on a change in pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#discussion_r734123264



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/AvailableSpaceRackFaultTolerantBlockPlacementPolicy.java
##
@@ -54,6 +57,10 @@ public void initialize(Configuration conf, FSClusterStats stats,
         DFS_NAMENODE_AVAILABLE_SPACE_RACK_FAULT_TOLERANT_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_PREFERENCE_FRACTION_KEY,
         DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_RACK_FAULT_TOLERANT_PLACEMENT_POLICY_BALANCED_SPACE_PREFERENCE_FRACTION_DEFAULT);
 
+    balancedSpaceTolerance = conf.getInt(
+        DFS_NAMENODE_AVAILABLE_SPACE_RACK_FAULT_TOLERANT_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_KEY,
+        DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_RACK_FAULT_TOLERANT_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_DEFAULT);

Review comment:
   Thanks @prasad-acit for your advice. It should be used for totally new 
code; for now, if we cut it short, the variable name's meaning may not be that 
clear. Maybe we can ignore it and apply the short name as far as possible in 
the future.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668710)
Time Spent: 4h 40m  (was: 4.5h)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668705&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668705
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 23:54
Start Date: 21/Oct/21 23:54
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-949084796


   > @GuoPhilipse I have retriggered the build here: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/13/console More 
or less looks like an Infra issue. If it works, cool; if not, I will try to 
figure that out. It is on me. :-)
   
   Cool, it shows +1 overall, but there are still failed checks in the report. 
Not sure if that is normal? @ayushtkn 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668705)
Time Spent: 4.5h  (was: 4h 20m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Hi,
> in a production environment we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?focusedWorklogId=668661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668661
 ]

ASF GitHub Bot logged work on HDFS-16281:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 21:27
Start Date: 21/Oct/21 21:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3574:
URL: https://github.com/apache/hadoop/pull/3574#issuecomment-949015172


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 51s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 23s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 445m  3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3574/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 544m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3574/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3574 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 13b08d7194cf 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 441215b9de8d9d03f94b8e19f1a2cee39cfc47d4 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 

[jira] [Resolved] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2021-10-21 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-7612.
---
Fix Version/s: 3.2.4
   3.3.2
   2.10.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

I just committed this to the four active branches.
Congratulations [~mkuchenbecker]!

> TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir
> -
>
> Key: HDFS-7612
> URL: https://issues.apache.org/jira/browse/HDFS-7612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Michael Kuchenbecker
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.3.2, 3.2.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> final String cacheDir = System.getProperty("test.cache.data",
> "build/test/cache");
> {code}
> results in
> {{FileNotFoundException: build/test/cache/editsStoredParsed.xml (No such file 
> or directory)}}
> when {{test.cache.data}} is not set.
> I can see this failing while running in Eclipse.
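
One plausible direction for a fix (a sketch under the assumption that the Maven test data directory is exposed via {{test.build.data}}; the actual patch is in PR #3571):

{code:java}
// Sketch only: fall back to the Maven test data directory instead of the
// stale Ant-era "build/test/cache" when test.cache.data is unset.
// The fallback below is an assumption, not the committed fix.
final String cacheDir = System.getProperty("test.cache.data",
    System.getProperty("test.build.data", "target/test/data") + "/cache");
{code}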



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2021-10-21 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reassigned HDFS-7612:
-

Assignee: Michael Kuchenbecker

> TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir
> -
>
> Key: HDFS-7612
> URL: https://issues.apache.org/jira/browse/HDFS-7612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Michael Kuchenbecker
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> final String cacheDir = System.getProperty("test.cache.data",
> "build/test/cache");
> {code}
> results in
> {{FileNotFoundException: build/test/cache/editsStoredParsed.xml (No such file 
> or directory)}}
> when {{test.cache.data}} is not set.
> I can see this failing while running in Eclipse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-7612?focusedWorklogId=668631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668631
 ]

ASF GitHub Bot logged work on HDFS-7612:


Author: ASF GitHub Bot
Created on: 21/Oct/21 20:27
Start Date: 21/Oct/21 20:27
Worklog Time Spent: 10m 
  Work Description: shvachko commented on pull request #3571:
URL: https://github.com/apache/hadoop/pull/3571#issuecomment-948978302


   +1 fixes the test for me


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668631)
Time Spent: 0.5h  (was: 20m)

> TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir
> -
>
> Key: HDFS-7612
> URL: https://issues.apache.org/jira/browse/HDFS-7612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> final String cacheDir = System.getProperty("test.cache.data",
> "build/test/cache");
> {code}
> results in
> {{FileNotFoundException: build/test/cache/editsStoredParsed.xml (No such file 
> or directory)}}
> when {{test.cache.data}} is not set.
> I can see this failing while running in Eclipse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668620&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668620
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 19:42
Start Date: 21/Oct/21 19:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948946779


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 51s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/13/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 8 new + 230 unchanged 
- 1 fixed = 238 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 224m 15s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 322m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3559 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint |
   | uname | Linux d084898ca220 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7d5be4e32e772a732e222d058552b79f42bfafbe |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Updated] (HDFS-16261) Configurable grace period around invalidation of replaced blocks

2021-10-21 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HDFS-16261:
-
Summary: Configurable grace period around invalidation of replaced blocks  
(was: Configurable grace period around deletion of invalidated blocks)

> Configurable grace period around invalidation of replaced blocks
> 
>
> Key: HDFS-16261
> URL: https://issues.apache.org/jira/browse/HDFS-16261
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>
> When a block is moved with REPLACE_BLOCK, the new location is recorded in the 
> NameNode and the NameNode instructs the old host to invalidate the block 
> using DNA_INVALIDATE. As it stands today, this invalidation is async but 
> tends to happen relatively quickly.
> I'm working on a feature for HBase which enables efficient healing of 
> locality through Balancer-style low-level block moves (HBASE-26250). One 
> issue is that HBase tends to keep long-running DFSInputStreams open, and 
> moving blocks from under them causes lots of warnings in the RegionServer and 
> increases long-tail latencies due to the necessary retries in the DFSClient.
> One way I'd like to fix this is to provide a configurable grace period on 
> async invalidations. This would give the DFSClient enough time to refresh 
> block locations before hitting any errors.
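> 
A minimal sketch of the grace-period idea described above, assuming 
invalidations are queued with a timestamp and dispatched only once the grace 
period has elapsed; all names here are illustrative, not actual HDFS APIs or 
configuration keys:
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.LongConsumer;

/** Illustrative sketch only; the names are hypothetical, not HDFS APIs. */
final class PendingInvalidations {
  private final long gracePeriodMs; // hypothetical, e.g. from a new config key
  private final Queue<long[]> queue = new ArrayDeque<>(); // {blockId, enqueueTime}

  PendingInvalidations(long gracePeriodMs) {
    this.gracePeriodMs = gracePeriodMs;
  }

  void enqueue(long blockId) {
    queue.add(new long[] {blockId, System.currentTimeMillis()});
  }

  /** Dispatch DNA_INVALIDATE only for blocks older than the grace period. */
  void dispatchReady(LongConsumer invalidate) {
    long now = System.currentTimeMillis();
    while (!queue.isEmpty() && now - queue.peek()[1] >= gracePeriodMs) {
      invalidate.accept(queue.poll()[0]);
    }
  }
}
{code}
Under this assumption, the DFSClient gets a window to refresh block locations 
before the old replica actually disappears.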



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16278) Make HDFS snapshot tools cross platform

2021-10-21 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17432622#comment-17432622
 ] 

Íñigo Goiri commented on HDFS-16278:


Thanks [~gautham] for the patch.
Merged PR 3563 to trunk.

> Make HDFS snapshot tools cross platform
> ---
>
> Key: HDFS-16278
> URL: https://issues.apache.org/jira/browse/HDFS-16278
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_createSnapshot*, *hdfs_disallowSnapshot* and 
> *hdfs_renameSnapshot* use getopt for parsing the command line arguments. 
> getopt is available only on Linux and thus isn't cross-platform. We need to 
> replace getopt with boost::program_options to make these tools cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16278) Make HDFS snapshot tools cross platform

2021-10-21 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-16278.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Make HDFS snapshot tools cross platform
> ---
>
> Key: HDFS-16278
> URL: https://issues.apache.org/jira/browse/HDFS-16278
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_createSnapshot*, *hdfs_disallowSnapshot* and 
> *hdfs_renameSnapshot* use getopt for parsing the command line arguments. 
> getopt is available only on Linux and thus isn't cross-platform. We need to 
> replace getopt with boost::program_options to make these tools cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16278) Make HDFS snapshot tools cross platform

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16278?focusedWorklogId=668558&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668558
 ]

ASF GitHub Bot logged work on HDFS-16278:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 17:24
Start Date: 21/Oct/21 17:24
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #3563:
URL: https://github.com/apache/hadoop/pull/3563


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668558)
Time Spent: 1h 10m  (was: 1h)

> Make HDFS snapshot tools cross platform
> ---
>
> Key: HDFS-16278
> URL: https://issues.apache.org/jira/browse/HDFS-16278
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_createSnapshot*, *hdfs_disallowSnapshot* and 
> *hdfs_renameSnapshot* use getopt for parsing the command line arguments. 
> getopt is available only on Linux and thus isn't cross-platform. We need to 
> replace getopt with boost::program_options to make these tools cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668489
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 15:19
Start Date: 21/Oct/21 15:19
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948721675


   > @GuoPhilipse I have retriggered the build here: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/13/console 
More or less it looks like an infra issue. If it works, cool; if not, I will 
try to figure that out. It is on me. :-)
   
   Really appreciate it! @ayushtkn :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668489)
Time Spent: 4h 10m  (was: 4h)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.
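> 
A minimal sketch of the comparison being discussed, assuming a configurable 
tolerance replaces the hard-coded 5% band; the class and method names are 
illustrative, not the actual patch:
{code:java}
/** Illustrative sketch of a tolerance-based usage comparison; not the patch. */
final class UsageComparator {
  private final int balancedSpaceTolerance; // percentage points, e.g. 5

  UsageComparator(int balancedSpaceTolerance) {
    this.balancedSpaceTolerance = balancedSpaceTolerance;
  }

  /**
   * Negative if node a is preferred (less used), positive if node b is
   * preferred, and 0 only when the two usages fall within the configured
   * tolerance of each other, so a nearly-full node is no longer tied with
   * a much emptier one when the tolerance is small.
   */
  int compareUsage(double usagePercentA, double usagePercentB) {
    if (Math.abs(usagePercentA - usagePercentB) < balancedSpaceTolerance) {
      return 0; // treated as equally loaded
    }
    return Double.compare(usagePercentA, usagePercentB);
  }
}
{code}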



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668460&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668460
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 14:24
Start Date: 21/Oct/21 14:24
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948670394


   @GuoPhilipse I have retriggered the build here:
   https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/13/console
   More or less it looks like an infra issue.
   If it works, cool; if not, I will try to figure that out. It is on me. :-)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668460)
Time Spent: 4h  (was: 3h 50m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668441
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:59
Start Date: 21/Oct/21 13:59
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on a change in pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#discussion_r733708491



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/AvailableSpaceRackFaultTolerantBlockPlacementPolicy.java
##
@@ -54,6 +57,10 @@ public void initialize(Configuration conf, FSClusterStats 
stats,
 
DFS_NAMENODE_AVAILABLE_SPACE_RACK_FAULT_TOLERANT_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_PREFERENCE_FRACTION_KEY,
 
DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_RACK_FAULT_TOLERANT_PLACEMENT_POLICY_BALANCED_SPACE_PREFERENCE_FRACTION_DEFAULT);
 
+balancedSpaceTolerance = conf.getInt(
+
DFS_NAMENODE_AVAILABLE_SPACE_RACK_FAULT_TOLERANT_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_KEY,
+
DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_RACK_FAULT_TOLERANT_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_DEFAULT);

Review comment:
   Constants that exceed the limit lead to line-length issues and can be 
shortened; BLOCK_PLACEMENT_POLICY => BPP could be used. Other places are 
already long, but at least in new code we can target this.
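
   Illustrative only, as a sketch of the suggested abbreviation; the renamed 
constant and its property string are hypothetical, not the actual patch:
   ```java
   // Hypothetical shortened name (BPP = BlockPlacementPolicy), per the review;
   // the property string is illustrative, not necessarily the real key.
   public static final String
       DFS_NAMENODE_AVAILABLE_SPACE_RACK_FAULT_TOLERANT_BPP_BALANCED_SPACE_TOLERANCE_KEY =
           "dfs.namenode.available-space-rack-fault-tolerant-block-placement-policy"
               + ".balanced-space-tolerance";
   ```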




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668441)
Time Spent: 3h 50m  (was: 3h 40m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668438&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668438
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:48
Start Date: 21/Oct/21 13:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948636284


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 24s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/12/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 25s | 

[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668426
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:33
Start Date: 21/Oct/21 13:33
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948622637


   It seems to be a GitHub issue?
   The mvn error message was the following:
   `ERROR: Failed to write github status. Token expired or missing repo:status 
write?`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668426)
Time Spent: 3.5h  (was: 3h 20m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668422&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668422
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:31
Start Date: 21/Oct/21 13:31
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on a change in pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#discussion_r733679533



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/AvailableSpaceBlockPlacementPolicy.java
##
@@ -77,6 +86,16 @@ public void initialize(Configuration conf, FSClusterStats 
stats,
   + " is less than 0.5 so datanodes with more used percent will"
   + " receive  more block allocations.");
 }
+
+if (balancedSpaceTolerance > 20 || balancedSpaceTolerance < 0) {
+  LOG.warn("The value of "
+  + 
DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_KEY

Review comment:
   Thanks @prasad-acit, that makes the warning message clearer.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668422)
Time Spent: 3h 20m  (was: 3h 10m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668409
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:25
Start Date: 21/Oct/21 13:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948615491


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 24s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 24s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 25s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/11/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 25s | 

[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668401&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668401
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 13:15
Start Date: 21/Oct/21 13:15
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on a change in pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#discussion_r733664093



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/AvailableSpaceBlockPlacementPolicy.java
##
@@ -77,6 +86,16 @@ public void initialize(Configuration conf, FSClusterStats 
stats,
   + " is less than 0.5 so datanodes with more used percent will"
   + " receive  more block allocations.");
 }
+
+if (balancedSpaceTolerance > 20 || balancedSpaceTolerance < 0) {
+  LOG.warn("The value of "
+  + 
DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_KEY

Review comment:
   It would be great if you could print the user-configured value here.
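
   A sketch of what the suggested warning could look like, reusing the field 
and constant names from the patch excerpt above (the message wording is 
illustrative):
   ```java
   // Sketch following the review suggestion: include the configured value in
   // the warning. Names come from the patch excerpt above; the message
   // wording is illustrative, not the actual patch.
   if (balancedSpaceTolerance > 20 || balancedSpaceTolerance < 0) {
     LOG.warn("The value of "
         + DFS_NAMENODE_AVAILABLE_SPACE_BLOCK_PLACEMENT_POLICY_BALANCED_SPACE_TOLERANCE_KEY
         + " is " + balancedSpaceTolerance
         + ", which is outside the supported range [0, 20];"
         + " the default value will be used instead.");
   }
   ```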




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668401)
Time Spent: 3h  (was: 2h 50m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may find that two or more datanodes' usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a 
> chance to choose the 99.99% node (assume it has the highest usage), because 
> we treat two chosen datanodes as having the same usage if their storage 
> usage differs by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668380
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 12:38
Start Date: 21/Oct/21 12:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948573705


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/10/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 22s | 

[jira] [Work logged] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?focusedWorklogId=668352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668352
 ]

ASF GitHub Bot logged work on HDFS-16281:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 12:20
Start Date: 21/Oct/21 12:20
Worklog Time Spent: 10m 
  Work Description: tomscut opened a new pull request #3574:
URL: https://github.com/apache/hadoop/pull/3574


   JIRA: [HDFS-16281](https://issues.apache.org/jira/browse/HDFS-16281)
   
   I found that this unit test `TestViewFileSystemOverloadSchemeWithHdfsScheme` 
failed several times due to timeout. Can we change the timeout for some methods 
from `3s` to `30s` to be consistent with the other methods?
   
   `[ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 
65.39 s <<< FAILURE! - in 
org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS
   [ERROR] 
testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)
  Time elapsed: 4.132 s  <<< ERROR!
   org.junit.runners.model.TestTimedOutException: test timed out after 3000 
milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
at org.apache.hadoop.ipc.Client.call(Client.java:1535)
at org.apache.hadoop.ipc.Client.call(Client.java:1432)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
at com.sun.proxy.$Proxy26.setTimes(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
at com.sun.proxy.$Proxy27.setTimes(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
at 
org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542)
at 
org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
at 
org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
at 
org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.writeString(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:685)
at 
org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.testNflyRepair(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:622)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 

[jira] [Updated] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16281:
--
Labels: pull-request-available  (was: )

> Fix flaky unit tests failed due to timeout
> --
>
> Key: HDFS-16281
> URL: https://issues.apache.org/jira/browse/HDFS-16281
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I found that this unit test 
> *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* failed several times due 
> to timeout. Can we change the timeout for some methods from _*3s*_ to *_30s_* 
> to be consistent with the other methods?
> {code:java}
> [ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 
> 65.39 s <<< FAILURE! - in 
> org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS[ERROR]
>  Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 s <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS[ERROR]
>  
> testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)
>   Time elapsed: 4.132 s  <<< 
> ERROR!org.junit.runners.model.TestTimedOutException: test timed out after 
> 3000 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59) at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1535) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1432) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>  at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>  at com.sun.proxy.$Proxy27.setTimes(Unknown Source) at 
> org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658) at 
> org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
>  at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542) 
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
>  at 
> org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
>  at 
> org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) at 
> org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.writeString(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:685)
>  at 
> org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.testNflyRepair(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:622)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> 

[jira] [Updated] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-16281:
---
Description: 
I found that this unit test *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* 
failed several times due to timeout. Can we change the timeout for some methods 
from _*3s*_ to *_30s_* to be consistent with the other methods?
{code:java}
[ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 
s <<< FAILURE! - in 
org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS[ERROR]
 Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 s <<< 
FAILURE! - in 
org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS[ERROR]
 
testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)  Time elapsed: 4.132 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3000 milliseconds
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
    at org.apache.hadoop.ipc.Client.call(Client.java:1535)
    at org.apache.hadoop.ipc.Client.call(Client.java:1432)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
    at com.sun.proxy.$Proxy26.setTimes(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
    at com.sun.proxy.$Proxy27.setTimes(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658)
    at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
    at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
    at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542)
    at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
    at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
    at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.writeString(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:685)
    at org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.testNflyRepair(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:622)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
 

 

  was:
I found that this unit test *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* 
failed several times due to timeout. Can we change the timeout for some methods 
from _*3s*_ to *_30s_* to be consistent with the other methods?

[jira] [Created] (HDFS-16281) Fix flaky unit tests failed due to timeout

2021-10-21 Thread tomscut (Jira)
tomscut created HDFS-16281:
--

 Summary: Fix flaky unit tests failed due to timeout
 Key: HDFS-16281
 URL: https://issues.apache.org/jira/browse/HDFS-16281
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: tomscut
Assignee: tomscut


I found that this unit test *_TestViewFileSystemOverloadSchemeWithHdfsScheme_* 
failed several times due to timeout. Can we change the timeout for some methods 
from _*3s*_ to *_30s_* to be consistent with the other methods?

 

 
{code:java}
[ERROR] Tests run: 19, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 65.39 s <<< FAILURE! - in org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS
[ERROR] testNflyRepair(org.apache.hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS)  Time elapsed: 4.132 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3000 milliseconds
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
    at org.apache.hadoop.ipc.Client.call(Client.java:1535)
    at org.apache.hadoop.ipc.Client.call(Client.java:1432)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
    at com.sun.proxy.$Proxy26.setTimes(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:1059)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
    at com.sun.proxy.$Proxy27.setTimes(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2658)
    at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1978)
    at org.apache.hadoop.hdfs.DistributedFileSystem$37.doCall(DistributedFileSystem.java:1975)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1988)
    at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:542)
    at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.setTimes(ChRootedFileSystem.java:328)
    at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.commit(NflyFSystem.java:439)
    at org.apache.hadoop.fs.viewfs.NflyFSystem$NflyOutputStream.close(NflyFSystem.java:395)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.writeString(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:685)
    at org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme.testNflyRepair(TestViewFileSystemOverloadSchemeWithHdfsScheme.java:622)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
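
For reference, the change being proposed amounts to widening the JUnit 4 timeout annotation on the affected test methods. A minimal sketch follows; the class and method names are illustrative, not the actual Hadoop test code:
{code:java}
import org.junit.Test;

public class TimeoutSketch {
  // Before: @Test(timeout = 3000). Three seconds is easily exceeded by RPC
  // round-trips to a mini cluster on a loaded CI host.
  // After: 30 seconds, consistent with the other methods in the test class.
  @Test(timeout = 30000)
  public void exampleThatTalksToTheNamenode() throws Exception {
    // test body elided; the only change under discussion is the annotation value
  }
}
{code}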
 

 



--
This message was sent by Atlassian Jira

[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668341
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 12:09
Start Date: 21/Oct/21 12:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948549468


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m 27s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 23s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 21s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  The patch fails to run checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 23s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 23s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 22s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 39s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/9/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 22s | 

[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668331
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 11:56
Start Date: 21/Oct/21 11:56
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948538645


   > Jenkins has reported some checkstyle warnings: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
   > 
   > They seem to be for line length. Can you check once? I guess the ones 
reported from the tests can be fixed. A couple of them are because the variable 
name is long; I think we can ignore those, as there is no good way to get rid of them.
   > 
   > Apart from that, the changes LGTM.
   
   Thanks @ayushtkn, the checkstyle warnings have been fixed, except for the 
long-variable-name ones.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668331)
Time Spent: 2.5h  (was: 2h 20m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.
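
To make the reported behavior concrete, here is a simplified sketch of the comparison involved. The class, names, and the coin-flip tie-break are illustrative assumptions, not the actual AvailableSpaceBlockPlacementPolicy source:
{code:java}
import java.util.Random;

public class UsageTieBreakSketch {
  // Candidates whose usage differs by less than 5% are treated as equal.
  private static final double BALANCED_SPACE_TOLERANCE = 0.05;
  private final Random rand = new Random();

  /** Returns 0 to pick node A, 1 to pick node B; usage values are in [0, 1]. */
  int pick(double usageA, double usageB) {
    if (Math.abs(usageA - usageB) < BALANCED_SPACE_TOLERANCE) {
      // A 99.99% full node and a 97% full node fall in here and can be
      // chosen with equal probability, which is the behavior reported above.
      return rand.nextBoolean() ? 0 : 1;
    }
    // Otherwise prefer the node with lower usage.
    return usageA < usageB ? 0 : 1;
  }
}
{code}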



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=668307&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668307
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 11:26
Start Date: 21/Oct/21 11:26
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-948516568


   > Thanks for updating the PR, @tomscut.
   > 
   > * I discussed with my colleagues, and they suggested that adding a new port field, instead of expanding the existing IP field, would have less impact on users who are analyzing the audit logs. What do you think?
   > * After [HDFS-13293](https://issues.apache.org/jira/browse/HDFS-13293), Router is forwarding the client IP via CallerContext. How about adding the client-side port to the CallerContext as well? Maybe we can consider it in another JIRA.
   
   Thanks @tasanuma and your colleagues for your good advice. And sorry for the late reply.
   
   ```I discussed with my colleagues, and they suggested that adding a new port field, instead of expanding the existing IP field, would have less impact on users who are analyzing the audit logs. What do you think?```
   I think it would be nice to put the port in a separate field, but adding the port to the IP field is optional at the moment, so I'm a little unsure which way is more appropriate. I'd like to ask a few other committers to look at this and give some suggestions. Anyway, I will update the PR in time.
   @aajisaka @iwasakims @ayushtkn @ferhui @Hexiaoqiao @goiri @jojochuang Could you please take a look at this and give some suggestions? Thank you very much!
   
   ```After [HDFS-13293](https://issues.apache.org/jira/browse/HDFS-13293), Router is forwarding the client IP via CallerContext. How about adding the client-side port to the CallerContext as well? Maybe we can consider it in another JIRA.```
   I would like to open a new JIRA to do this. Thank you for pointing this out.
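   
   For illustration, the two layouts under discussion might look like the following made-up audit-log lines, modeled loosely on the existing format:
   ```
   # Current format: IP but no port
   allowed=true  ugi=alice (auth:SIMPLE)  ip=/10.1.2.3  cmd=getfileinfo  src=/data/f  dst=null  perm=null  proto=rpc
   
   # Option 1: expand the existing IP field
   allowed=true  ugi=alice (auth:SIMPLE)  ip=/10.1.2.3:39654  cmd=getfileinfo  src=/data/f  dst=null  perm=null  proto=rpc
   
   # Option 2: add a separate port field, leaving existing log parsers untouched
   allowed=true  ugi=alice (auth:SIMPLE)  ip=/10.1.2.3  port=39654  cmd=getfileinfo  src=/data/f  dst=null  perm=null  proto=rpc
   ```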


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668307)
Time Spent: 3h 20m  (was: 3h 10m)

> Add remote port information to HDFS audit log
> -
>
> Key: HDFS-16266
> URL: https://issues.apache.org/jira/browse/HDFS-16266
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a 
> user submits an abnormal computation task, causing a sudden flood of 
> requests that drives the queueTime and processingTime of the Namenode very 
> high and creates a large backlog of tasks.
> We usually locate and kill specific Spark, Flink, or MapReduce tasks based on 
> metrics and audit logs. Currently, IP and UGI are recorded in audit logs, but 
> there is no port information, so it is sometimes difficult to locate specific 
> processes. Therefore, I propose that we add the port information to the audit 
> log so that we can easily track the upstream process.
> Currently, some projects include port information in their audit logs, such 
> as HBase and Alluxio. I think it is also necessary to add port information to 
> HDFS audit logs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668280&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668280
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 10:49
Start Date: 21/Oct/21 10:49
Worklog Time Spent: 10m 
  Work Description: GuoPhilipse commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948489020


   @ayushtkn Thanks for your timely review. The comments have been improved and 
the tests are OK.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668280)
Time Spent: 2h 20m  (was: 2h 10m)

> improve decision in AvailableSpaceBlockPlacementPolicy
> --
>
> Key: HDFS-16277
> URL: https://issues.apache.org/jira/browse/HDFS-16277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.3.1
>Reporter: guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Hi,
> In a production environment, we may meet two or more datanodes whose usage 
> reaches nearly 100%, for example 99.99%, 98%, 97%.
> If we configure `AvailableSpaceBlockPlacementPolicy`, we still have a chance 
> to choose the 99.99% one (assume it is the highest usage), because we treat 
> two chosen datanodes as having the same usage if their storage usage differs 
> by less than 5%.
> But this is not what we want, so I suggest we improve the decision.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16277) improve decision in AvailableSpaceBlockPlacementPolicy

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16277?focusedWorklogId=668256&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668256
 ]

ASF GitHub Bot logged work on HDFS-16277:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 10:04
Start Date: 21/Oct/21 10:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3559:
URL: https://github.com/apache/hadoop/pull/3559#issuecomment-948455272


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 39s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 16 new + 230 unchanged - 1 fixed = 246 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 23s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 227m 53s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 323m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3559/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3559 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint |
   | uname | Linux cabb3364f533 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 91b8d430f8e7e85062300f4c8e3019b46477ac2c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Resolved] (HDFS-16270) Improve NNThroughputBenchmark#printUsage() related to block size

2021-10-21 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu resolved HDFS-16270.
-
Resolution: Not A Problem

> Improve NNThroughputBenchmark#printUsage() related to block size
> 
>
> Key: HDFS-16270
> URL: https://issues.apache.org/jira/browse/HDFS-16270
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When using the NNThroughputBenchmark test, if the usage is not correct, we 
> get a prompt message. E.g.:
> '
> If connecting to a remote NameNode with -fs option, 
> dfs.namenode.fs-limits.min-block-size should be set to 16.
> 21/10/13 11:55:32 INFO util.ExitUtil: Exiting with status -1: ExitException
> '
> Yes, this is good.
> However, even when the setting of 'dfs.blocksize' has been completed before 
> execution, for example:
> conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
> we will still get the above prompt, which is wrong.
> The hint itself is also misleading: it should refer not to 
> 'dfs.namenode.fs-limits.min-block-size' but to 'dfs.blocksize', because in 
> the NNThroughputBenchmark constructor, 'dfs.namenode.fs-limits.min-block-size' 
> has already been set to 0 in advance.
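
A minimal sketch of the mismatch being described; DFS_BLOCK_SIZE_KEY and DFS_NAMENODE_MIN_BLOCK_SIZE_KEY are real config keys, while the surrounding code is illustrative only:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class MinBlockSizeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The user sets the block size to 16 before running the benchmark...
    conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
    // ...but the benchmark itself forces the minimum block size to 0, so the
    // prompt about dfs.namenode.fs-limits.min-block-size is misleading; it is
    // dfs.blocksize that actually needs attention.
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 0);
    System.out.println(conf.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, -1));
  }
}
{code}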



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=668144&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-668144
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 21/Oct/21 08:18
Start Date: 21/Oct/21 08:18
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-948371693


   Thanks for updating the PR, @tomscut.
   - I discussed with my colleagues, and they suggested that adding a new port field, instead of expanding the existing IP field, would have less impact on users who are analyzing the audit logs. What do you think?
   - After HDFS-13293, Router is forwarding the client IP via CallerContext. How about adding the client-side port to the CallerContext as well? Maybe we can consider it in another JIRA (a rough sketch follows below).
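   
   A rough sketch of what forwarding the client port via CallerContext could look like; the key names are hypothetical and this is not the actual Router code:
   ```java
   import org.apache.hadoop.ipc.CallerContext;
   
   public class CallerContextPortSketch {
     /** Attach the client address to the current CallerContext (hypothetical key names). */
     static void forwardClientAddress(String clientIp, int clientPort) {
       CallerContext ctx = new CallerContext.Builder(
           "clientIp:" + clientIp + ",clientPort:" + clientPort).build();
       CallerContext.setCurrent(ctx);
     }
   }
   ```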


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 668144)
Time Spent: 3h 10m  (was: 3h)

> Add remote port information to HDFS audit log
> -
>
> Key: HDFS-16266
> URL: https://issues.apache.org/jira/browse/HDFS-16266
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a 
> user submits an abnormal computation task, causing a sudden flood of 
> requests that drives the queueTime and processingTime of the Namenode very 
> high and creates a large backlog of tasks.
> We usually locate and kill specific Spark, Flink, or MapReduce tasks based on 
> metrics and audit logs. Currently, IP and UGI are recorded in audit logs, but 
> there is no port information, so it is sometimes difficult to locate specific 
> processes. Therefore, I propose that we add the port information to the audit 
> log so that we can easily track the upstream process.
> Currently, some projects include port information in their audit logs, such 
> as HBase and Alluxio. I think it is also necessary to add port information to 
> HDFS audit logs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org