[
https://issues.apache.org/jira/browse/HDDS-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17823277#comment-17823277
]
Tsz-wo Sze edited comment on HDDS-10043 at 3/4/24 6:14 PM:
-----------------------------------------------------------
bq. at org.apache.hadoop.hdds.scm.storage.BufferPool.allocateBuffer(BufferPool.java:93)
The line is
{code}
//BufferPool.java: 93
final ChunkBuffer newBuffer = ChunkBuffer.allocate(bufferSize, increment);
{code}
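With a nonzero increment, ChunkBuffer.allocate should return a buffer that grows lazily in increment-sized pieces rather than allocating the full bufferSize up front. A sketch with hypothetical values:
{code:java}
// Hypothetical values: a 4 MB buffer with a 64 KB increment, so the
// underlying pieces are allocated only as data is written into the buffer.
final int bufferSize = 4 << 20; // 4 MB
final int increment = 64 << 10; // 64 KB
final ChunkBuffer lazyBuffer = ChunkBuffer.allocate(bufferSize, increment);
{code}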
It seems the increment was not taking effect. Are you sure that the following conf
was set to 64k on the client side?
- ozone.client.stream.buffer.increment
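For example, it could be set in the client-side ozone-site.xml (a sketch; the 64KB value assumes the usual storage-size unit syntax):
{code:xml}
<!-- client-side ozone-site.xml -->
<property>
  <name>ozone.client.stream.buffer.increment</name>
  <value>64KB</value>
</property>
{code}
It can also be set programmatically on the client's Configuration object before the client is created.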
> java.lang.OutOfMemoryError on FSDataOutputStream write ops
> ----------------------------------------------------------
>
> Key: HDDS-10043
> URL: https://issues.apache.org/jira/browse/HDDS-10043
> Project: Apache Ozone
> Issue Type: Bug
> Components: Ozone Client
> Reporter: Pratyush Bhatt
> Priority: Major
>
> Seems like FSDataOutputStream utilises more heap memory on Ozone.
> Doing write ops in ~300 FSDataOutputStream objects sequentially causes a
> {color:#FF0000}_java.lang.OutOfMemoryError: Java heap space_{color} error.
> FSDataOutputStream on HDFS, by contrast, doesn't hit heap issues: the same
> test with ~3000 objects works fine.
> Note: Both tests were performed in the same environment.
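> A minimal sketch of the write loop described above (the path, write size, and
> the assumption that each stream is closed before the next is opened are all
> hypothetical):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class SequentialWriteRepro {
>   public static void main(String[] args) throws Exception {
>     // Assumes fs.defaultFS points at the Ozone cluster (ofs:// or o3fs://).
>     Configuration conf = new Configuration();
>     try (FileSystem fs = FileSystem.get(conf)) {
>       byte[] data = new byte[4096]; // hypothetical write size
>       for (int i = 0; i < 300; i++) {
>         Path path = new Path("/tmp/repro/file-" + i); // hypothetical path
>         try (FSDataOutputStream out = fs.create(path)) {
>           out.write(data);
>         }
>       }
>     }
>   }
> }
> {code}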
> Client is a Kube container, specs:
> {code:yaml}
> limits:
>   cpu: "1"
>   ephemeral-storage: 5G
>   memory: 300M
> requests:
>   cpu: 200m
>   ephemeral-storage: 1G
>   memory: 200M
> {code}