[ https://issues.apache.org/jira/browse/HDFS-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005198#comment-17005198 ]
zhangbutao commented on HDFS-15085:
-----------------------------------
[~ferhui] Extra test info has been added, and you can try to reproduce the
exception. As far as I can tell, Parquet and text data can be recovered
correctly when one DataNode is shut down; it seems that small files hit this
problem. Thanks!
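For reference, here is a minimal client-side sketch that exercises the same read path as the stack trace below (FSDataInputStream.readFully on the striped file). The class name EcReadRepro is hypothetical, and the file path comes from the repro steps; this is a sketch assuming the 5-DataNode test cluster, not a definitive test:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical repro sketch: after dn1 is stopped, a positional readFully()
// on the small EC file should follow the same decode path as the trace below.
public class EcReadRepro {
  public static void main(String[] args) throws Exception {
    // Assumes HADOOP_CONF_DIR points at the 5-DataNode test cluster.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/testec/orcfile"); // path from the repro steps
    int len = (int) fs.getFileStatus(file).getLen();
    byte[] buf = new byte[len];
    try (FSDataInputStream in = fs.open(file)) {
      // readFully(position, buffer) goes through DFSInputStream.pread ->
      // DFSStripedInputStream.fetchBlockByteRange, which must EC-decode the
      // cells stored on the stopped DataNode.
      in.readFully(0, buf);
    }
    System.out.println("Read " + len + " bytes successfully");
  }
}
{code}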
> Erasure Coding: some ORC data cannot be recovered when some DataNodes
> are shut down
> ----------------------------------------------------------------------------------------
>
> Key: HDFS-15085
> URL: https://issues.apache.org/jira/browse/HDFS-15085
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ec
> Affects Versions: 3.1.0
> Reporter: zhangbutao
> Priority: Major
> Attachments: orcfile
>
>
> Test environment: Hadoop 3.1.0, 5 DataNodes
> Steps to reproduce:
> 1. Set the EC policy RS-3-2-1024k on all HDFS paths:
> hdfs ec -setPolicy -path / RS-3-2-1024k
> 2. Put the small ORC file into HDFS from the host running DataNode dn1:
> hdfs dfs -put orcfile /tmp/testec/
> 3. Shut down DataNode dn1, then run the following command to verify the
> ORC data:
> hive --orcfiledump /tmp/testec/orcfile
> 4. The following error is output on the client side:
> {code:java}
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of length 974814
> at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkOutputBuffers(ByteBufferDecodingState.java:138)
> at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:48)
> at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:423)
> at org.apache.hadoop.hdfs.PositionStripeReader.decode(PositionStripeReader.java:74)
> at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:382)
> at org.apache.hadoop.hdfs.DFSStripedInputStream.fetchBlockByteRange(DFSStripedInputStream.java:479)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1442)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1400)
> at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
> at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
> at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:557)
> at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:276)
> at org.apache.orc.impl.RecordReaderImpl.readAllDataStreams(RecordReaderImpl.java:1099)
> at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1055)
> at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1208)
> at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1243)
> at org.apache.orc.impl.RecordReaderImpl.<init>(RecordReaderImpl.java:273)
> at org.apache.orc.impl.ReaderImpl.rows(ReaderImpl.java:633)
> at org.apache.orc.impl.ReaderImpl.rows(ReaderImpl.java:627)
> at org.apache.orc.tools.FileDump.printMetaDataImpl(FileDump.java:309)
> at org.apache.orc.tools.FileDump.printMetaData(FileDump.java:274)
> at org.apache.orc.tools.FileDump.main(FileDump.java:135)
> at org.apache.orc.tools.FileDump.main(FileDump.java:142)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
> {code}
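> For context, the check that fires is ByteBufferDecodingState.checkOutputBuffers. Below is a simplified sketch of that validation as I read it from the exception message (decodeLength is 974814 in this trace); it is an assumption, not the verbatim Hadoop source:
> {code:java}
> import java.nio.ByteBuffer;
> import org.apache.hadoop.HadoopIllegalArgumentException;
>
> // Sketch (assumption based on the exception message): each output ByteBuffer
> // handed to the decoder must have exactly decodeLength bytes remaining.
> class OutputBufferCheckSketch {
>   static void checkOutputBuffers(ByteBuffer[] buffers, int decodeLength) {
>     for (ByteBuffer buffer : buffers) {
>       if (buffer == null) {
>         throw new HadoopIllegalArgumentException("Buffer is null");
>       }
>       if (buffer.remaining() != decodeLength) {
>         // The failure above: while decoding the small file's stripe, a
>         // buffer arrived whose remaining() did not equal 974814.
>         throw new HadoopIllegalArgumentException(
>             "Invalid buffer, not of length " + decodeLength);
>       }
>     }
>   }
> }
> {code}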