[ https://issues.apache.org/jira/browse/HADOOP-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran resolved HADOOP-18216.
-------------------------------------
    Resolution: Fixed

Documentation is updated. The PR discusses a followup with a safety check on use.

> Document "io.file.buffer.size" must be greater than zero
> --------------------------------------------------------
>
>                 Key: HADOOP-18216
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18216
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Jingxuan Fu
>            Assignee: Jingxuan Fu
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When the "io.file.buffer.size" field in the configuration file is set to a
> value less than or equal to zero, HDFS still starts normally, but reading and
> writing data then fail.
> When the value is less than zero, the shell throws the following exception:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> -cat: Fatal internal error
> java.lang.NegativeArraySizeException: -4096
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:93)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
>         at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>         at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>         at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>         at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>         at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>         at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:391){code}
> When the value is equal to zero, the shell command blocks forever:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> ^Z
> [2]+  Stopped                 ./hdfs dfs -cat mapred{code}
> The description of the setting in the configuration file is not clear enough;
> it may lead people to think that setting the value to 0 enables a non-blocking
> mode.
>
> {code:java}
> <property>
>   <name>io.file.buffer.size</name>
>   <value>4096</value>
>   <description>The size of buffer for use in sequence files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>{code}
>
> Considering that this value is used frequently by HDFS and MapReduce, it
> should be required to be a number greater than zero.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
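Editor's note: both failure modes quoted above fall out of a copyBytes-style loop that allocates a byte buffer of the configured size: a negative size fails at the array allocation, and a zero size makes each read() return 0, so the loop never reaches EOF and the caller hangs. The following is a minimal, self-contained Java sketch of that behaviour, not the actual Hadoop IOUtils code; the checkedBufferSize guard is a hypothetical illustration of the kind of "safety check on use" the followup PR discusses.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Simplified sketch of a copyBytes-style loop; not the actual Hadoop code. */
public class BufferSizeDemo {

    // Allocate a buffer of the configured size, then loop on read().
    // bufferSize < 0 : new byte[bufferSize] throws NegativeArraySizeException.
    // bufferSize == 0: read(buf, 0, 0) returns 0 while data remains, so the
    //                  loop never sees EOF (-1) and spins forever (the hang).
    static void copy(InputStream in, OutputStream out, int bufferSize) throws IOException {
        byte[] buf = new byte[bufferSize];
        int n;
        while ((n = in.read(buf, 0, buf.length)) != -1) {
            out.write(buf, 0, n);
        }
    }

    // Hypothetical guard of the kind the followup discusses: reject a
    // non-positive io.file.buffer.size up front with a clear message.
    static int checkedBufferSize(int configured) {
        if (configured <= 0) {
            throw new IllegalArgumentException(
                "io.file.buffer.size must be greater than zero, got " + configured);
        }
        return configured;
    }

    public static void main(String[] args) throws IOException {
        // Normal case: a positive buffer size copies the data.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("hello".getBytes()), out, 4096);
        System.out.println(out.toString());  // hello

        // Negative size reproduces the "-cat: Fatal internal error" failure.
        try {
            copy(new ByteArrayInputStream("hello".getBytes()),
                 new ByteArrayOutputStream(), -4096);
        } catch (NegativeArraySizeException e) {
            System.out.println("caught " + e.getClass().getSimpleName());
        }

        // The zero case is NOT run here: it would loop forever, matching the
        // blocked shell command in the report. The guard rejects it instead.
        try {
            checkedBufferSize(0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the guard in place, both bad configurations fail fast at startup with an explanatory message instead of surfacing later as an array-allocation error or a silent hang.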