[jira] [Commented] (SPARK-21517) Fetch local data via block manager cause oom
[ https://issues.apache.org/jira/browse/SPARK-21517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099685#comment-16099685 ]

zhoukang commented on SPARK-21517:
----------------------------------

[~kiszk] In our production cluster we use 1.6.1 and 2.1.0, and the issue reproduces on both. We have not upgraded to 2.2 yet, but I compared against 2.2 and the logic here has not been modified.

> Fetch local data via block manager cause oom
>
> Key: SPARK-21517
> URL: https://issues.apache.org/jira/browse/SPARK-21517
> Project: Spark
> Issue Type: Improvement
> Components: Block Manager, Spark Core
> Affects Versions: 1.6.1, 2.1.0
> Reporter: zhoukang
>
> In our production cluster, an OOM happens when NettyBlockRpcServer receives an OpenBlocks message. The cause we observed is as follows:
> when BlockManagerManagedBuffer calls ChunkedByteBuffer#toNetty, it uses Unpooled.wrappedBuffer(ByteBuffer... buffers), which applies the default maxNumComponents=16 in the underlying CompositeByteBuf. When the number of components exceeds 16, the following method in CompositeByteBuf runs during the wrap and copies every component into one newly allocated buffer, consuming extra memory:
> {code:java}
> private void consolidateIfNeeded() {
>     int numComponents = this.components.size();
>     if (numComponents > this.maxNumComponents) {
>         // Capacity of the new buffer = end offset of the last component,
>         // i.e. the total size of all wrapped buffers.
>         int capacity = ((CompositeByteBuf.Component) this.components.get(numComponents - 1)).endOffset;
>         ByteBuf consolidated = this.allocBuffer(capacity);
>         // Copy every component into the single consolidated buffer.
>         for (int c = 0; c < numComponents; ++c) {
>             CompositeByteBuf.Component c1 = (CompositeByteBuf.Component) this.components.get(c);
>             ByteBuf b = c1.buf;
>             consolidated.writeBytes(b);
>             c1.freeIfNecessary();
>         }
>         CompositeByteBuf.Component last = new CompositeByteBuf.Component(consolidated);
>         last.endOffset = last.length;
>         this.components.clear();
>         this.components.add(last);
>     }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
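The memory cost described above can be sketched without Netty on the classpath. The following standalone model (the class and method names are hypothetical, for illustration only) mimics what consolidateIfNeeded() does using plain java.nio: if the number of wrapped buffers stays at or below the maxNumComponents threshold they are merely referenced, but once the threshold is exceeded the entire payload is copied into one freshly allocated buffer of the combined size.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of CompositeByteBuf consolidation; not the Netty code itself.
public class ConsolidationSketch {

    // Returns how many bytes consolidation would copy when wrapping the given
    // buffers with the given maxNumComponents threshold.
    static long bytesCopiedOnConsolidation(List<ByteBuffer> components, int maxNumComponents) {
        if (components.size() <= maxNumComponents) {
            return 0L; // at or below the threshold: buffers are referenced, not copied
        }
        long capacity = 0L;
        for (ByteBuffer b : components) {
            capacity += b.remaining();
        }
        // consolidateIfNeeded() allocates one buffer of the total size and
        // writes every component into it, so the whole payload is duplicated.
        return capacity;
    }

    public static void main(String[] args) {
        List<ByteBuffer> chunks = new ArrayList<>();
        for (int i = 0; i < 32; i++) {
            chunks.add(ByteBuffer.allocate(4 * 1024 * 1024)); // 32 chunks of 4 MiB
        }
        // Default threshold of 16: all 128 MiB are copied into a second buffer.
        System.out.println(bytesCopiedOnConsolidation(chunks, 16));
        // Threshold >= number of chunks: no copy happens at all.
        System.out.println(bytesCopiedOnConsolidation(chunks, chunks.size()));
    }
}
```

This also illustrates why a fix can pass the chunk count as maxNumComponents when wrapping: with the threshold at least as large as the number of chunks, consolidation never triggers and no extra allocation occurs.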
[ https://issues.apache.org/jira/browse/SPARK-21517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099654#comment-16099654 ]

Kazuaki Ishizaki commented on SPARK-21517:
------------------------------------------

Does it occur in Spark 2.2?
[ https://issues.apache.org/jira/browse/SPARK-21517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099412#comment-16099412 ]

zhoukang commented on SPARK-21517:
----------------------------------

Can anyone help verify the patch related to this issue? Thanks very much.
[ https://issues.apache.org/jira/browse/SPARK-21517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099411#comment-16099411 ]

zhoukang commented on SPARK-21517:
----------------------------------

Can anyone help verify the patch related to this issue?
[ https://issues.apache.org/jira/browse/SPARK-21517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098074#comment-16098074 ]

Apache Spark commented on SPARK-21517:
--------------------------------------

User 'caneGuy' has created a pull request for this issue:
https://github.com/apache/spark/pull/18723