[ https://issues.apache.org/jira/browse/HDDS-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16799733#comment-16799733 ]

Mukul Kumar Singh commented on HDDS-1317:
-----------------------------------------

Thanks for working on this [~shashikant]. I am still reviewing the added test 
cases. Please find my comments below.

1) BlockID:77, StringBuffer -> StringBuilder
2) BlockOutputStream:50, wild card imports
3) BlockOutputStream:109, let's remove "of data for ".
4) BlockOutputStream:185, extra line, let's revert this.
5) BufferPool:44, let's rename getBuffer to getCurrentBuffer
6) BufferPool:77, instead of just comparing the lengths, we should also verify 
that we have the same reference.
7) TestBlockOutputStream:155, after the flush, the buffer will be returned 
to the pool; let's assert that its position is reset to zero.
8) TestBlockOutputStream:152, 156, 159, the comments need to be corrected
9) Can these tests also verify the number of blocks and chunks written?
10) TestBlockOutputStream, testFlushChunk:214, before the flush, two chunks should 
already have been written and put chunk should also have been issued. Let's add an 
assert on the number of ops. This will help in differentiating between 
testFlushChunk & testMutiChunkWrite.
The metrics in XceiverClientMetrics can be used to verify this information.
11) TestBlockOutputStream:241, testMutiChunkWrite -> testMultiChunkWrite
12) TestBlockOutputStream:327, the comments need to be corrected
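
To make comments 5, 6, and 7 concrete, here is a minimal sketch of the intended pool behavior. This is a hypothetical illustration, not the actual Ozone BufferPool: the class and method bodies are assumptions, only the names getCurrentBuffer/allocateBufferIfNeeded come from the review above.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pool semantics suggested in comments 5-7;
// not the real org.apache.hadoop.hdds.scm.storage.BufferPool.
class SketchBufferPool {
  private final List<ByteBuffer> buffers = new ArrayList<>();
  private int currentIndex = -1;
  private final int bufferSize;

  SketchBufferPool(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  // Comment 5: name the accessor for what it returns -- the current buffer.
  ByteBuffer getCurrentBuffer() {
    return currentIndex < 0 ? null : buffers.get(currentIndex);
  }

  ByteBuffer allocateBufferIfNeeded() {
    ByteBuffer current = getCurrentBuffer();
    if (current != null && current.hasRemaining()) {
      return current;
    }
    ByteBuffer next = ByteBuffer.allocate(bufferSize);
    buffers.add(next);
    currentIndex = buffers.size() - 1;
    return next;
  }

  void releaseBuffer(ByteBuffer buffer) {
    // Comment 6: check reference identity, not just matching lengths.
    if (buffer != buffers.get(0)) {
      throw new IllegalArgumentException(
          "released buffer is not the oldest buffer in the pool");
    }
    // Comment 7: a buffer returned to the pool must have position zero.
    buffer.clear();
    buffers.remove(0);
    currentIndex--;
  }
}
```

Under this sketch, a test along the lines of comment 7 would write into the current buffer, release it, and assert `buffer.position() == 0`.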


> KeyOutputStream#write throws ArrayIndexOutOfBoundsException when running 
> RandomWrite MR examples
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDDS-1317
>                 URL: https://issues.apache.org/jira/browse/HDDS-1317
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>    Affects Versions: 0.4.0
>            Reporter: Xiaoyu Yao
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDDS-1317.000.patch, HDDS-1317.001.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Repro steps:
> {code} 
> hadoop jar $HADOOP_MAPRED_HOME/hadoop-mapreduce-examples-*.jar randomwriter 
> -Dtest.randomwrite.total_bytes=10000000 o3fs://bucket1.vol1/randomwrite.out
> {code}
>  
> Error Stack:
> {code}
> 2019-03-20 19:02:37 INFO Job:1686 - Task Id : 
> attempt_1553108378906_0002_m_000000_0, Status : FAILED
> Error: java.lang.ArrayIndexOutOfBoundsException: -5
>  at java.util.ArrayList.elementData(ArrayList.java:422)
>  at java.util.ArrayList.get(ArrayList.java:435)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.getBuffer(BufferPool.java:45)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.allocateBufferIfNeeded(BufferPool.java:59)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:215)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:130)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:311)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:273)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:46)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>  at java.io.DataOutputStream.write(DataOutputStream.java:107)
>  at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1444)
>  at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
>  at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:670)
>  at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>  at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:199)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:165)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
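
For context on the trace above: the failure is ArrayList.get being handed a negative index (-5) inside BufferPool.getBuffer. On JDK 8, ArrayList.get only range-checks the upper bound, so a negative index slips past the check and fails inside elementData, which is why it surfaces as ArrayIndexOutOfBoundsException: -5 (newer JDKs throw a plain IndexOutOfBoundsException with a different message). A minimal reproduction, independent of the Ozone code:

```java
import java.util.ArrayList;
import java.util.List;

class NegativeIndexDemo {
  // Calling get() with a negative index mirrors what happens in
  // BufferPool.getBuffer when the current-buffer index underflows.
  // Returns the caught exception so callers can inspect it.
  static IndexOutOfBoundsException probe(List<Integer> list, int index) {
    try {
      list.get(index);
      return null;
    } catch (IndexOutOfBoundsException e) {
      // ArrayIndexOutOfBoundsException is a subclass, so this catch
      // covers both the JDK 8 and the JDK 9+ behavior.
      return e;
    }
  }
}
```

On JDK 8 the exception message is just the offending index, matching the "-5" in the stack trace above.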



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
