[
https://issues.apache.org/jira/browse/HDFS-16610?focusedWorklogId=776566&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-776566
]
ASF GitHub Bot logged work on HDFS-16610:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 31/May/22 21:30
Start Date: 31/May/22 21:30
Worklog Time Spent: 10m
Work Description: sodonnel commented on PR #4384:
URL: https://github.com/apache/hadoop/pull/4384#issuecomment-1142657950
The native client is failing to build, but I cannot see how this change
could cause that. I wonder if something else is going on, or whether another
recent change has broken the build?
```
[INFO] Reactor Summary for Apache Hadoop HDFS Project 3.4.0-SNAPSHOT:
[INFO]
[INFO] Apache Hadoop HDFS Client .......................... SUCCESS [ 34.048 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [ 57.213 s]
[INFO] Apache Hadoop HDFS Native Client ................... FAILURE [ 7.360 s]
```
Issue Time Tracking
-------------------
Worklog Id: (was: 776566)
Time Spent: 1h (was: 50m)
> Make fsck read timeout configurable
> -----------------------------------
>
> Key: HDFS-16610
> URL: https://issues.apache.org/jira/browse/HDFS-16610
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> In a cluster with a lot of small files, we encountered a case where fsck was
> very slow. I believe this was due to contention with many other threads
> reading and writing data on the cluster.
> Sometimes fsck reports no progress for more than 60 seconds and the client
> times out. Currently the connect and read timeouts are hardcoded to 60
> seconds; this change makes them configurable.
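For illustration, a minimal sketch of the approach the description implies: look the connect and read timeouts up from configuration (falling back to the current 60-second default) rather than hardcoding them, and apply them to the HTTP connection fsck uses to stream results from the NameNode. The property names and `FsckTimeoutExample` class below are hypothetical stand-ins; the actual patch would use Hadoop `Configuration` keys inside the `DFSck` client, not system properties.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class FsckTimeoutExample {
    // Default matches the hardcoded 60-second timeout described in the issue.
    static final int DEFAULT_TIMEOUT_MS = 60_000;

    // Hypothetical lookup: the real change would read Hadoop Configuration
    // keys, but a system property with a default shows the same pattern.
    static int timeoutMs(String propertyName) {
        return Integer.getInteger(propertyName, DEFAULT_TIMEOUT_MS);
    }

    // Apply the configurable connect and read timeouts to the connection.
    static HttpURLConnection open(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(timeoutMs("fsck.connect.timeout.ms"));
        conn.setReadTimeout(timeoutMs("fsck.read.timeout.ms"));
        return conn;
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not contact the server, so this runs offline;
        // the hostname below is a placeholder.
        HttpURLConnection c = open(new URL("http://namenode.example:9870/fsck"));
        System.out.println("connect timeout: " + c.getConnectTimeout() + " ms");
        System.out.println("read timeout:    " + c.getReadTimeout() + " ms");
    }
}
```

With no properties set, both timeouts fall back to 60 000 ms, preserving today's behavior; a slow cluster could raise only the read timeout without touching the connect timeout.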
--
This message was sent by Atlassian Jira
(v8.20.7#820007)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]