[ 
https://issues.apache.org/jira/browse/HDDS-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17823293#comment-17823293
 ] 

Pratyush Bhatt commented on HDDS-10043:
---------------------------------------

Oh yeah, in the above trace the property was set to 0B. Here is the new trace 
with ozone.client.stream.buffer.increment set to 64KB:
{code:bash}
ozone getconf -confKey ozone.client.stream.buffer.increment
64KB
{code}
 
{code:java}
Test set: org.ozonehsync.TestOzoneHsync
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 34.719 sec <<< FAILURE!
checkOzoneHsync(org.ozonehsync.TestOzoneHsync)  Time elapsed: 31.393 sec  <<< ERROR!
org.apache.ratis.thirdparty.io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 4194304 byte(s) of direct memory (used: 1774190871, max: 1778384896)
        at org.apache.ratis.thirdparty.io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:843)
        at org.apache.ratis.thirdparty.io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:772)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:717)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:692)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena.allocate(PoolArena.java:139)
        at org.apache.ratis.thirdparty.io.netty.buffer.PoolArena.allocate(PoolArena.java:129)
        at org.apache.ratis.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:395)
        at org.apache.ratis.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
        at org.apache.hadoop.hdds.utils.db.CodecBuffer.lambda$static$0(CodecBuffer.java:130)
        at org.apache.hadoop.hdds.utils.db.CodecBuffer.allocate(CodecBuffer.java:196)
        at org.apache.hadoop.hdds.utils.db.CodecBuffer.allocateDirect(CodecBuffer.java:204)
        at org.apache.hadoop.ozone.common.IncrementalChunkBuffer.getAndAllocateAtIndex(IncrementalChunkBuffer.java:129)
        at org.apache.hadoop.ozone.common.IncrementalChunkBuffer.getAndAllocateAtPosition(IncrementalChunkBuffer.java:142)
        at org.apache.hadoop.ozone.common.IncrementalChunkBuffer.put(IncrementalChunkBuffer.java:225)
        at org.apache.hadoop.ozone.common.ChunkBuffer.put(ChunkBuffer.java:107)
        at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:285)
        at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:125)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.writeToOutputStream(KeyOutputStream.java:270)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:248)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:228)
        at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:92)
        at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:50)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:62)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
        at org.ozonehsync.TestOzoneHsync.checkOzoneHsync(TestOzoneHsync.java:85)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}

> java.lang.OutOfMemoryError on FSDataOutputStream write ops
> ----------------------------------------------------------
>
>                 Key: HDDS-10043
>                 URL: https://issues.apache.org/jira/browse/HDDS-10043
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Pratyush Bhatt
>            Priority: Major
>
> Seems like FSDataOutputStream utilises more heap memory than expected.
> Doing write ops on ~300 FSDataOutputStream objects sequentially causes a 
> {color:#FF0000}_java.lang.OutOfMemoryError: Java heap space_ error.{color}
> FSDataOutputStream on HDFS, by contrast, doesn't hit heap issues: ~3000 
> objects were tried and worked fine.
> Note: both tests were performed in the same environment.
> The client is a Kubernetes container with the following resource specs:
> {code:yaml}
>       limits:
>         cpu: "1"
>         ephemeral-storage: 5G
>         memory: 300M
>       requests:
>         cpu: 200m
>         ephemeral-storage: 1G
>         memory: 200M
> {code}
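The IncrementalChunkBuffer frames in the trace suggest the client allocates a new fixed-size increment each time a write crosses an increment boundary. A rough illustration of that pattern (a simplified model only: it uses heap buffers rather than Ozone's direct-memory CodecBuffer, and the IncrementModel class and its fields are hypothetical names, not Ozone's actual implementation):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified model of incremental chunk buffering: a new fixed-size
// increment is allocated whenever a write crosses an increment boundary.
public class IncrementModel {
    static final int INCREMENT = 64 * 1024; // ozone.client.stream.buffer.increment
    final List<ByteBuffer> increments = new ArrayList<>();
    int position = 0;

    void put(byte[] data) {
        int off = 0;
        while (off < data.length) {
            int idx = position / INCREMENT;
            if (idx == increments.size()) {
                // Each boundary crossing allocates another increment; with
                // direct memory these allocations accumulate until the
                // direct-memory limit is exhausted.
                increments.add(ByteBuffer.allocate(INCREMENT));
            }
            ByteBuffer buf = increments.get(idx);
            int n = Math.min(data.length - off, buf.remaining());
            buf.put(data, off, n);
            off += n;
            position += n;
        }
    }

    public static void main(String[] args) {
        IncrementModel m = new IncrementModel();
        m.put(new byte[150 * 1024]); // 150 KB write crosses two 64 KB boundaries
        System.out.println(m.increments.size()); // 3 increments allocated
    }
}
```

If each open stream holds buffers like these as direct memory, many streams written in sequence would pin an accumulating share of the direct-memory budget, which would be consistent with the OutOfDirectMemoryError in the trace.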



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
