[
https://issues.apache.org/jira/browse/HDDS-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17505062#comment-17505062
]
Uma Maheswara Rao G commented on HDDS-6356:
-------------------------------------------
Hi [~cchenax], could you please confirm whether this issue can be reproduced
in a distributed cluster?
If it occurs only in local mode, then we can focus on checking the client
cache key generation. In a distributed cluster, key generation should not be
an issue, as the hosts will be different.
> EC: the offset is less than writeoffset
> ---------------------------------------
>
> Key: HDDS-6356
> URL: https://issues.apache.org/jira/browse/HDDS-6356
> Project: Apache Ozone
> Issue Type: Sub-task
> Components: EC
> Reporter: chen chao
> Assignee: cchenaxchen
> Priority: Major
>
> I am using the latest code. The bucket creation commands are:
> bin/ozone sh volume create vol1
> bin/ozone sh bucket create vol1/defaultbucket --layout=FILE_SYSTEM_OPTIMIZED -t EC -r rs-3-2-1024k
>
> I run the following MapReduce job:
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.1-tests.jar TestDFSIO -Dtest.build.data=o3fs://defaultbucket.vol1/dfsio/test2 -write -nrFiles 3 -size 10000MB
>
> Error: java.lang.IllegalArgumentException: Expected writeOffset= 1616000000 Expected offset=1613758464
>         at org.apache.hadoop.ozone.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:144)
>         at org.apache.hadoop.ozone.client.io.ECKeyOutputStream.close(ECKeyOutputStream.java:539)
>         at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>         at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:136)
>         at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
>
> After adding some logs, I can see that the offset is less than the writeOffset,
> and that stripeSize equals numDataBlks * ecChunkSize; this is the problem:
> 2022-02-23 16:16:01,347 INFO [main] org.apache.hadoop.ozone.client.io.KeyOutputStream: name = main lastStripeSize is 3145728 writeOffset is 4838000000 offset is 4834983936
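The logged values can be checked directly. A minimal sketch of the stripe-size arithmetic, assuming the "rs-3-2-1024k" config means 3 data blocks, 2 parity blocks, and a 1024 KiB chunk size (variable names are illustrative, not Ozone's):

```python
# Stripe-size arithmetic for the "rs-3-2-1024k" EC replication config.
# Assumption: 3 data blocks, 2 parity blocks, 1024 KiB chunks.
num_data_blks = 3
ec_chunk_size = 1024 * 1024          # 1024k = 1 MiB

stripe_size = num_data_blks * ec_chunk_size
print(stripe_size)                   # 3145728, matches the logged lastStripeSize

write_offset = 4838000000            # bytes written by the client, per the log
offset = 4834983936                  # bytes the stream has accounted for
gap = write_offset - offset
print(gap)                           # 3016064 bytes, less than one full stripe,
                                     # which suggests the final partial stripe
                                     # was not counted toward the offset
```

The discrepancy being smaller than one stripe is consistent with the last (partial) stripe's bytes going missing from the offset accounting at close time.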
> 2022-02-23 16:16:01,349 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.IllegalArgumentException
>         at org.apache.hadoop.ozone.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:130)
>         at org.apache.hadoop.ozone.client.io.ECKeyOutputStream.close(ECKeyOutputStream.java:550)
>         at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>         at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:136)
>         at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
>
>
> 2022-02-28 11:44:36,096 [ChunkReader-6] INFO org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler: Operation: WriteChunk , Trace ID: , Message: Chunk file offset 1048576 does not match blockFile length 372736 , Result: CHUNK_FILE_INCONSISTENCY , StorageContainerException Occurred.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Chunk file offset 1048576 does not match blockFile length 372736
>         at org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.validateChunkSize(ChunkUtils.java:387)
>         at org.apache.hadoop.ozone.container.keyvalue.impl.FilePerBlockStrategy.writeChunk(FilePerBlockStrategy.java:140)
>         at org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerDispatcher.writeChunk(ChunkManagerDispatcher.java:74)
>         at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleWriteChunk(KeyValueHandler.java:746)
>         at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.dispatchRequest(KeyValueHandler.java:223)
>         at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:187)
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:307)
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.lambda$dispatch$0(HddsDispatcher.java:169)
>         at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:87)
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:168)
>         at org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:57)
--
This message was sent by Atlassian Jira
(v8.20.1#820001)