[ https://issues.apache.org/jira/browse/HDFS-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300987#comment-16300987 ]
SammiChen commented on HDFS-12860:
----------------------------------
Hi [~eddyxu], thanks for reporting the issue and working on it.
1. It's great to add error messages that provide more information when the
Precondition checks fail. There are "%d" placeholders used in String.format and
"%s" placeholders used in Preconditions. Is it because Preconditions doesn't
support "%d"? (A small sketch of the difference is below.)
2. ")" is missed in {{AlignedStripe.toString}} and {{StripingCell.toString}}
3. Can you add some javadoc or comments to
{{testDivideOneStripeLargeBlockSize}}? If we want to test a block group larger
than 2GB, using RS-6-3-1024k as an example, the {{stripSize}} is 9 * 1MB,
{{stripesPerBlock}} needs to be > (2 * 1024) / 9, and {{blockSize}} is
{{cellSize * stripesPerBlock}}; see the second sketch below. Also, I would
suggest adding an end-to-end test case in {{TestErasureCodingPolicies}}.
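
To make point 1 concrete, here is a minimal sketch of the placeholder
difference I am asking about (the class and variable names below are made up
for illustration, not taken from the patch): as far as I know,
{{String.format}} accepts the full printf syntax such as "%d", while Guava's
{{Preconditions}} message template only recognizes "%s" and stringifies the
arguments.
{code}
import com.google.common.base.Preconditions;

public class PlaceholderSketch {
  static void check(long offsetInBlock, long spanInBlock) {
    // String.format is printf-style, so numeric placeholders like %d work.
    System.out.println(String.format(
        "offsetInBlock=%d, spanInBlock=%d", offsetInBlock, spanInBlock));

    // Guava's Preconditions only recognizes %s in its message template and
    // converts each argument with String.valueOf(), so %s must be used here.
    Preconditions.checkArgument(offsetInBlock >= 0 && spanInBlock >= 0,
        "Invalid range: offsetInBlock=%s, spanInBlock=%s",
        offsetInBlock, spanInBlock);
  }

  public static void main(String[] args) {
    check(0L, 1L);   // passes
    check(-1L, 1L);  // throws IllegalArgumentException with the %s message
  }
}
{code}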
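And for point 3, a rough back-of-the-envelope sketch of the parameters (the
values are illustrative only, not the actual test code), showing how the block
group just goes over 2GB with RS-6-3-1024k:
{code}
public class BlockGroupSizeSketch {
  public static void main(String[] args) {
    // RS-6-3-1024k: 6 data + 3 parity internal blocks, 1MB cells.
    int cellSize = 1024 * 1024;                          // 1MB
    int stripSize = 9 * cellSize;                        // 9MB per stripe (6 data + 3 parity cells)
    int stripesPerBlock = 228;                           // smallest value > (2 * 1024) / 9 ~= 227.6
    long blockSize = (long) cellSize * stripesPerBlock;  // 228MB per internal block
    long blockGroupSize = 9L * blockSize;                // 2052MB, just over 2GB

    System.out.println("stripSize=" + stripSize
        + ", blockSize=" + blockSize
        + ", blockGroupSize=" + blockGroupSize);
  }
}
{code}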
> StripedBlockUtil#getRangesInternalBlocks throws exception for the block group
> size larger than 2GB
> --------------------------------------------------------------------------------------------------
>
> Key: HDFS-12860
> URL: https://issues.apache.org/jira/browse/HDFS-12860
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding
> Affects Versions: 3.0.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12860.00.patch
>
>
> Running terasort on a cluster with 8 datanodes, 256GB data, using
> RS-3-2-1024k.
> The test data was generated by {{teragen}} with 32 mappers.
> The terasort benchmark fails with the following stack trace:
> {code}
> 17/11/27 14:44:31 INFO mapreduce.Job: map 45% reduce 0%
> 17/11/27 14:44:33 INFO mapreduce.Job: Task Id : attempt_1510080297865_0160_m_000008_0, Status : FAILED
> Error: java.lang.IllegalArgumentException
>         at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>         at org.apache.hadoop.hdfs.util.StripedBlockUtil$VerticalRange.<init>(StripedBlockUtil.java:701)
>         at org.apache.hadoop.hdfs.util.StripedBlockUtil.getRangesForInternalBlocks(StripedBlockUtil.java:442)
>         at org.apache.hadoop.hdfs.util.StripedBlockUtil.divideOneStripe(StripedBlockUtil.java:311)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:308)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:391)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:813)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:257)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:562)
>         at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>         at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}