[
https://issues.apache.org/jira/browse/HDFS-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207395#comment-15207395
]
Vladimir Rodionov commented on HDFS-10194:
------------------------------------------
I will try to explain how this affects HBase.
A well-known HBase issue is poor behavior under compaction stress. When the HBase
compactor writes a new file, it naturally goes through the DFS (HDFS) write API. For
every 1MB written to HDFS, the HBase RegionServer JVM allocates a 1MB buffer in Eden
space, so if HBase writes 100MB/sec, that is 100MB/sec of allocation pressure on Eden.
Young GCs get triggered more frequently, which leads to premature promotion of objects
to the tenured generation, which eventually results in long full GC pauses, which, in
turn, sometimes result in RS crashes.
This is for CMS.
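For illustration only, the allocation-rate arithmetic above can be sketched as follows; the Eden size used here is an assumption for the example, not a figure from this ticket:

```java
// Back-of-the-envelope estimate: if every HDFS write allocates a fresh
// buffer, Eden fills at the write rate, and each fill triggers a young GC.
public class EdenFillEstimate {

    /** Seconds until an Eden of the given size fills at the given allocation rate. */
    static double secondsToFill(double edenMb, double allocMbPerSec) {
        return edenMb / allocMbPerSec;
    }

    public static void main(String[] args) {
        // Assumed for illustration: 512 MB Eden, 100 MB/sec sustained write rate.
        double secs = secondsToFill(512, 100);
        System.out.printf(
            "Eden fills every %.2f s -> roughly one young GC every %.2f s%n",
            secs, secs);
    }
}
```

At that cadence, objects that are merely medium-lived (still referenced across a few young GCs) get promoted to the tenured generation, inflating it until a long full GC is needed.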
> FSDataOutputStream.write() allocates new byte buffer on each operation
> ----------------------------------------------------------------------
>
> Key: HDFS-10194
> URL: https://issues.apache.org/jira/browse/HDFS-10194
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Affects Versions: 2.7.1
> Reporter: Vladimir Rodionov
>
> This is the code:
> {code}
> private DFSPacket createPacket(int packetSize, int chunksPerPkt, long offsetInBlock,
>     long seqno, boolean lastPacketInBlock) throws InterruptedIOException {
>   final byte[] buf;
>   final int bufferSize = PacketHeader.PKT_MAX_HEADER_LEN + packetSize;
>
>   try {
>     buf = byteArrayManager.newByteArray(bufferSize);
>   } catch (InterruptedException ie) {
>     final InterruptedIOException iioe = new InterruptedIOException(
>         "seqno=" + seqno);
>     iioe.initCause(ie);
>     throw iioe;
>   }
>
>   return new DFSPacket(buf, chunksPerPkt, offsetInBlock, seqno,
>       getChecksumSize(), lastPacketInBlock);
> }
> {code}
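For illustration, here is a minimal sketch of the buffer-reuse idea the ticket asks for. `PacketBufferPool` is a hypothetical class written for this example; it is not the actual `ByteArrayManager` API:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: recycle packet buffers instead of allocating a new
// byte[] per packet, so the steady-state allocation rate in Eden no longer
// scales with write throughput.
public class PacketBufferPool {
    private final ConcurrentLinkedQueue<byte[]> pool = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    public PacketBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    /** Returns a recycled buffer if one is available, otherwise allocates one. */
    public byte[] take() {
        byte[] buf = pool.poll();
        return (buf != null) ? buf : new byte[bufferSize];
    }

    /** Returns a buffer to the pool once its packet has been acknowledged. */
    public void recycle(byte[] buf) {
        if (buf.length == bufferSize) {
            pool.offer(buf);
        }
    }
}
```

With a pool like this, a writer cycles through a small fixed set of buffers, so sustained 100MB/sec writes no longer translate into 100MB/sec of fresh Eden allocations.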
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)