[ https://issues.apache.org/jira/browse/HBASE-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15213853#comment-15213853 ]

Lars Hofhansl commented on HBASE-15506:
---------------------------------------

Let's be careful, please. Many times, when one tries to be smarter than the GC, 
it backfires (false lifetimes of reused objects, etc.). Allocating stuff on the 
heap is not necessarily bad by itself. That's what the heap is for. :)
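
To make the "false lifetimes" point concrete, here's a minimal sketch (not 
HDFS or HBase code; the class and method names are made up) of the kind of 
hand-rolled buffer pool that can backfire: the pool keeps every 64KB chunk 
strongly reachable, so buffers that would otherwise die young in the nursery 
get promoted and sit in the old generation for the life of the stream.

{code:java}
import java.util.ArrayDeque;

// Illustration only: a naive pool that trades short-lived garbage for
// long-lived, always-reachable buffers ("false lifetimes").
public class NaiveBufferPool {
    private static final int PACKET_SIZE = 64 * 1024;
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public synchronized byte[] take() {
        byte[] buf = free.poll();
        return buf != null ? buf : new byte[PACKET_SIZE];
    }

    public synchronized void release(byte[] buf) {
        // The pool never shrinks, so every buffer ever released stays
        // strongly reachable and is likely to be promoted to the old gen.
        free.push(buf);
    }
}
{code}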

It's only bad when there's a _measurable_ performance impact (including long GC 
pauses)... Or if the allocation is strictly unnecessary.

Which of the two options uses more memory bandwidth? That's the metric I'd 
be interested in.
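
If it helps, here's a rough, self-contained sketch of the comparison I mean 
(again, not HDFS code; the sizes and names are mine): allocate a fresh 64KB 
buffer per packet vs. reuse one buffer, with both variants writing the same 
amount of payload. A real measurement would use JMH and watch allocation rate 
and GC logs rather than wall time, but even this crude loop should show 
whether the extra allocations move the needle.

{code:java}
import java.util.Arrays;

// Rough sketch only: the allocating variant pays extra memory traffic for
// zero-initializing each new array, plus the GC work to reclaim it.
public class PacketBufferBench {
    private static final int PACKET_SIZE = 64 * 1024;   // 64KB, as in the HDFS stream packets
    private static final int PACKETS = 100 * 1024 / 64; // ~100MB of payload = 1600 packets

    static long allocatePerPacket() {
        long sum = 0;
        for (int i = 0; i < PACKETS; i++) {
            byte[] buf = new byte[PACKET_SIZE];          // fresh buffer per packet
            Arrays.fill(buf, (byte) i);                  // simulate filling the packet
            sum += buf[0];
        }
        return sum;
    }

    static long reuseBuffer() {
        long sum = 0;
        byte[] buf = new byte[PACKET_SIZE];              // one buffer for all packets
        for (int i = 0; i < PACKETS; i++) {
            Arrays.fill(buf, (byte) i);
            sum += buf[0];
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 5; run++) {              // crude warmup + repeat; JMH would be better
            long t0 = System.nanoTime();
            long a = allocatePerPacket();
            long t1 = System.nanoTime();
            long b = reuseBuffer();
            long t2 = System.nanoTime();
            System.out.printf("run %d: alloc-per-packet %d us, reuse %d us (%d/%d)%n",
                run, (t1 - t0) / 1000, (t2 - t1) / 1000, a, b);
        }
    }
}
{code}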

In any case, does this hold for G1 as well?

It's possible I am missing something. We're talking about the 64KB buffers 
allocated for the HDFS stream packets, right? 100MB of garbage in 64KB chunks 
is a mere 1600 objects. Surely we're not going to be worried about that.
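(For reference: 100MB / 64KB = (100 * 1024 KB) / 64KB = 1600 allocations.)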

Are compactions or HFileWriter doing something stupid?

Since there is a fix in HDFS-7276, I clearly must be missing something. But 
what is it?


> FSDataOutputStream.write() allocates new byte buffer on each operation
> ----------------------------------------------------------------------
>
>                 Key: HBASE-15506
>                 URL: https://issues.apache.org/jira/browse/HBASE-15506
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>
> Deep inside stack trace in DFSOutputStream.createPacket.
> This should be opened in HDFS. This JIRA is to track HDFS work.


