Gargi-jais11 commented on code in PR #9166:
URL: https://github.com/apache/ozone/pull/9166#discussion_r2454075630


##########
hadoop-hdds/common/src/main/resources/ozone-default.xml:
##########
@@ -465,6 +465,18 @@
     <description>Socket timeout for Ozone client. Unit could be defined with
       postfix (ns,ms,s,m,h,d)</description>
   </property>
+  <property>
+    <name>ozone.client.elastic.byte.buffer.pool.max.size</name>
+    <value>16GB</value>
+    <tag>OZONE, CLIENT</tag>
+    <description>
+      The maximum total size of buffers that can be cached in the client-side
+      ByteBufferPool. This pool is used heavily during EC read and write operations.
+      Setting a limit prevents unbounded memory growth in long-lived RPC clients
+      like the S3 Gateway. Once this limit is reached, used buffers are not
+      put back to the pool and will be garbage collected.

Review Comment:
   In Java, we can't deallocate memory manually (there is no equivalent of free() in C/C++). The only way to free memory is to remove all references to an object and let the Garbage Collector (GC) reclaim it.
   When our pool is full, returning without storing the buffer does exactly that: the buffer becomes "unreachable," and the GC will handle its deallocation.
   
   So, I believe that while we are still relying on the GC (which is unavoidable in Java), it now handles a much smaller fraction of objects, which is exactly what we want in order to reduce overall memory and GC pressure.
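
   To make the release path concrete, here is a minimal sketch of the "drop when full" behavior (hypothetical class and field names, not the actual pool implementation in this PR; the byte-accounting details are assumptions):

   ```java
   import java.nio.ByteBuffer;
   import java.util.Queue;
   import java.util.concurrent.ConcurrentLinkedQueue;
   import java.util.concurrent.atomic.AtomicLong;

   // Hypothetical sketch of a size-capped buffer pool. Once the cap is
   // reached, released buffers are simply not re-pooled; dropping the
   // last reference makes them unreachable, so the GC reclaims them.
   public class CappedByteBufferPool {
     private final long maxPoolBytes;                         // configured cap, e.g. 16 GB
     private final AtomicLong pooledBytes = new AtomicLong(); // bytes currently cached
     private final Queue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();

     public CappedByteBufferPool(long maxPoolBytes) {
       this.maxPoolBytes = maxPoolBytes;
     }

     public ByteBuffer getBuffer(int size) {
       ByteBuffer buf = pool.poll();
       if (buf != null) {
         pooledBytes.addAndGet(-buf.capacity());
         if (buf.capacity() >= size) {
           buf.clear();
           return buf;
         }
         // Cached buffer too small: drop the reference (GC reclaims it)
         // and fall through to a fresh allocation.
       }
       return ByteBuffer.allocate(size);
     }

     public void putBuffer(ByteBuffer buf) {
       // Pool at capacity: return without storing the buffer. With no
       // remaining references it becomes unreachable and the GC frees it.
       // (The check-then-add race may briefly overshoot the cap; that is
       // acceptable for this sketch.)
       if (pooledBytes.get() + buf.capacity() > maxPoolBytes) {
         return;
       }
       pooledBytes.addAndGet(buf.capacity());
       pool.offer(buf);
     }
   }
   ```

   The important part is the early return in putBuffer: nothing is freed explicitly; the pool just declines to keep a reference, so only the overflow fraction of buffers ever reaches the GC.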



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
