[
https://issues.apache.org/jira/browse/HADOOP-18216?focusedWorklogId=762485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762485
]
ASF GitHub Bot logged work on HADOOP-18216:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 26/Apr/22 18:52
Start Date: 26/Apr/22 18:52
Worklog Time Spent: 10m
Work Description: steveloughran commented on PR #4220:
URL: https://github.com/apache/hadoop/pull/4220#issuecomment-1110139088
@Hexiaoqiao yes, you can add a hadoop.util.Preconditions check where the value
is loaded, but do it in a new PR. Let's merge this doc change in first, as it is
self-contained and low risk to backport (safe unless the XML itself is broken).
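A minimal sketch of the kind of check being suggested, assuming the checkArgument overload in org.apache.hadoop.util.Preconditions; the helper name and load site are hypothetical, not the eventual patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.Preconditions;

public class BufferSizeCheck {
  // Hypothetical load-site helper: reject non-positive buffer sizes up front
  // instead of failing later inside IOUtils.copyBytes.
  static int getIoFileBufferSize(Configuration conf) {
    int bufferSize = conf.getInt("io.file.buffer.size", 4096);
    Preconditions.checkArgument(bufferSize > 0,
        "io.file.buffer.size must be greater than zero, but was %s", bufferSize);
    return bufferSize;
  }
}
{code}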
Issue Time Tracking
-------------------
Worklog Id: (was: 762485)
Time Spent: 1h 10m (was: 1h)
> Ensure "io.file.buffer.size" is greater than zero. Otherwise, it will lead to
> data read/write blockage
> ------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-18216
> URL: https://issues.apache.org/jira/browse/HADOOP-18216
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Jingxuan Fu
> Assignee: Jingxuan Fu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> When the "io.file.buffer.size" field in the configuration file is set to a
> value less than or equal to zero, HDFS can still start normally, but reading
> and writing data will fail.
> When the value is less than zero, the shell throws the following exception:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> -cat: Fatal internal error
> java.lang.NegativeArraySizeException: -4096
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:93)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
> at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391){code}
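> The exception comes from the buffer allocation in IOUtils.copyBytes; a
> simplified sketch of the failing step (not the exact Hadoop source):
> {code:java}
> // Allocating the copy buffer fails immediately when the configured
> // size is negative, before any data is read.
> byte[] buf = new byte[buffSize]; // NegativeArraySizeException for buffSize < 0
> {code}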
> When the value is equal to zero, the shell command blocks indefinitely:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> ^Z
> [2]+ Stopped ./hdfs dfs -cat mapred{code}
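> The apparent hang is consistent with the copy loop making no progress: a
> read into a zero-length buffer returns 0 rather than -1, so end-of-stream
> is never detected. A simplified sketch of the assumed loop shape (not the
> exact Hadoop source):
> {code:java}
> // With buf.length == 0, in.read(buf) returns 0 per the InputStream
> // contract, so bytesRead never becomes -1 and the loop spins forever,
> // which looks like a hang from the shell.
> int bytesRead = in.read(buf);
> while (bytesRead >= 0) {
>   out.write(buf, 0, bytesRead);
>   bytesRead = in.read(buf);
> }
> {code}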
> The description in the configuration file is not clear enough; it may lead
> people to think that setting the value to 0 enables a non-blocking mode.
>
> {code:java}
> <property>
>   <name>io.file.buffer.size</name>
>   <value>4096</value>
>   <description>The size of buffer for use in sequence files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>{code}
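>
> A clarified description might state the constraint explicitly; the wording
> below is a suggestion, not the text merged in the PR:
> {code:java}
> <property>
>   <name>io.file.buffer.size</name>
>   <value>4096</value>
>   <description>The size of buffer for use in sequence files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86). It determines how much data is
>   buffered during read and write operations. Must be greater than
>   zero; non-positive values break reads and writes rather than
>   enabling any non-blocking mode.</description>
> </property>{code}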
>
> Considering that this value is used frequently by HDFS and MapReduce, we
> should require it to be a number greater than zero.
>
--
This message was sent by Atlassian Jira
(v8.20.7#820007)