[ https://issues.apache.org/jira/browse/HDDS-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDDS-10361.
------------------------------------
    Fix Version/s: HDDS-7593
       Resolution: Fixed

> [hsync] Output stream should support direct byte buffer
> -------------------------------------------------------
>
>                 Key: HDDS-10361
>                 URL: https://issues.apache.org/jira/browse/HDDS-10361
>             Project: Apache Ozone
>          Issue Type: Sub-task
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: HDDS-7593
>
>
> I'm trying to cherry-pick HDDS-9843 (Ozone client high memory (heap) 
> utilization, #6153) from master to the HDDS-7593 dev branch, but it fails 
> with this error:
> {noformat}
> Failed to flush. error: null
> java.lang.UnsupportedOperationException
>       at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.appendLastChunkBuffer(BlockOutputStream.java:858)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.updateBlockDataForWriteChunk(BlockOutputStream.java:814)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:769)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:565)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlushInternal(BlockOutputStream.java:598)
>       at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:573)
>       at org.apache.hadoop.hdds.scm.storage.RatisBlockOutputStream.hsync(RatisBlockOutputStream.java:139)
>       at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.hsync(BlockOutputStreamEntry.java:158)
>       at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:551)
>       at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:514)
>       at org.apache.hadoop.ozone.client.io.KeyOutputStream.hsync(KeyOutputStream.java:484)
>       at org.apache.hadoop.ozone.client.io.OzoneOutputStream.hsync(OzoneOutputStream.java:118)
>       at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.hsync(OzoneFSOutputStream.java:70)
>       at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.hflush(OzoneFSOutputStream.java:65)
>       at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:136)
>       at org.apache.hadoop.hbase.io.asyncfs.WrapperAsyncFSOutput.flush0(WrapperAsyncFSOutput.java:92)
>       at org.apache.hadoop.hbase.io.asyncfs.WrapperAsyncFSOutput.lambda$flush$0(WrapperAsyncFSOutput.java:113)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The incremental chunk list feature assumes a heap byte buffer, but HDDS-9843 
> requires a direct byte buffer. The output stream should support direct byte 
> buffers as well.
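> For context, {{ByteBuffer.array()}} only works when the buffer is backed by an 
> accessible heap array; on a direct buffer it throws 
> {{UnsupportedOperationException}}, which is what the trace above shows. Below is 
> a minimal, hypothetical sketch (not the actual fix, and not Ozone code) of a 
> copy helper that handles both buffer types; the class and method names are 
> illustrative only.
> {code:java}
> import java.nio.ByteBuffer;
>
> public class BufferCopyExample {
>   // Copies the remaining bytes of any ByteBuffer into a byte[] without
>   // calling array(), which fails on direct (off-heap) buffers.
>   static byte[] toByteArray(ByteBuffer buf) {
>     byte[] out = new byte[buf.remaining()];
>     if (buf.hasArray()) {
>       // Heap buffer: the backing array is directly accessible.
>       System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(),
>           out, 0, buf.remaining());
>     } else {
>       // Direct buffer: bulk-get via a duplicate so the caller's
>       // position and limit are left untouched.
>       buf.duplicate().get(out);
>     }
>     return out;
>   }
>
>   public static void main(String[] args) {
>     ByteBuffer heap = ByteBuffer.wrap("heap".getBytes());
>     ByteBuffer direct = ByteBuffer.allocateDirect(6);
>     direct.put("direct".getBytes());
>     direct.flip();
>     System.out.println(new String(toByteArray(heap)));   // heap
>     System.out.println(new String(toByteArray(direct))); // direct
>   }
> }
> {code}
> The actual change on the HDDS-7593 branch presumably removes the {{array()}} 
> call reported in {{BlockOutputStream.appendLastChunkBuffer}} above.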



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
