[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Affects Version/s: (was: 2.4.0)
   2.3.0

> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.3.0
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the histogram of MAT (the Eclipse Memory
> Analyzer Tool), io.netty.channel.ChannelOutboundBuffer$Entry objects occupy
> about 2.8G, and the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
> Histogram:
> !histo.jpg!
> dominator_tree:
> !dominator.jpg!
>  
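The description above implies the fix direction: respect Netty's writability signal. As a rough check on the numbers, 23891232 pending bytes across 248867 flushed entries is only about 96 bytes of payload per entry, so the Entry bookkeeping itself accounts for much of the 2.8G retained. Below is a minimal, hedged Netty sketch of the kind of back-pressure check described as missing; it is illustrative only, not Spark's actual YarnShuffleService code, and the handler name is invented.

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative handler: apply back-pressure instead of letting
// ChannelOutboundBuffer entries accumulate without bound.
public class WritabilityAwareHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (!ctx.channel().isWritable()) {
            // unwritable=1 in the heap dump corresponds to
            // isWritable() == false: totalPendingSize has crossed the
            // high water mark. Pause reads so no further entries pile up.
            ctx.channel().config().setAutoRead(false);
        }
        // The message already read is still answered; back-pressure only
        // stops us from accepting new requests.
        ctx.writeAndFlush(msg);
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        // Fired when pending bytes drain below the low water mark; resume.
        if (ctx.channel().isWritable()) {
            ctx.channel().config().setAutoRead(true);
        }
        ctx.fireChannelWritabilityChanged();
    }
}
{code}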






[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Affects Version/s: (was: 2.2.3)
   2.4.0

> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.4.0
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the MAT histogram,
> io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
> the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
> Histogram:
> !histo.jpg!
> dominator_tree:
> !dominator.jpg!
>  
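For reference on the 64K figure above: 64 KiB is Netty's default high water mark for the write buffer, and the dump's totalPendingSize of 23891232 bytes (~23 MB) exceeds it by more than 300x. A hedged sketch of how that threshold is configured on a Netty server bootstrap follows; the class and method names are illustrative, not Spark's actual configuration path.

{code:java}
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

public class WaterMarkConfig {
    // Illustrative helper: once pending bytes exceed the high mark, the
    // channel reports isWritable() == false (unwritable=1), but writes are
    // still accepted and queued -- which is why entries keep accumulating
    // when callers ignore the flag.
    public static void apply(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
            new WriteBufferWaterMark(32 * 1024, 64 * 1024)); // low, high
    }
}
{code}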






[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Description: 
The NodeManager heap size is 4G. In the MAT histogram,
io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
the MAT dominator tree shows those entries are held by ChannelOutboundBuffer.
Analyzing one ChannelOutboundBuffer object, I found 248867 entries in it
(ChannelOutboundBuffer#flushed=248867), with
ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
high water mark (64K), and unwritable=1, meaning the send buffer was full.
But the ChannelHandler does not seem to check the unwritable flag when
writing messages, so the NodeManager eventually runs out of memory.

Histogram:

!histo.jpg!

dominator_tree:
!dominator.jpg!


 

  was:
The NodeManager heap size is 4G. In the MAT histogram,
io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
the MAT dominator tree shows those entries are held by ChannelOutboundBuffer.
Analyzing one ChannelOutboundBuffer object, I found 248867 entries in it
(ChannelOutboundBuffer#flushed=248867), with
ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
high water mark (64K), and unwritable=1, meaning the send buffer was full.
But the ChannelHandler does not seem to check the unwritable flag when
writing messages, so the NodeManager eventually runs out of memory.

Histogram:

!histo.jpg!

dominator_tree:

 


> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.2.3
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the MAT histogram,
> io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
> the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
> Histogram:
> !histo.jpg!
> dominator_tree:
> !dominator.jpg!
>  






[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Description: 
The NodeManager heap size is 4G. In the MAT histogram,
io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
the MAT dominator tree shows those entries are held by ChannelOutboundBuffer.
Analyzing one ChannelOutboundBuffer object, I found 248867 entries in it
(ChannelOutboundBuffer#flushed=248867), with
ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
high water mark (64K), and unwritable=1, meaning the send buffer was full.
But the ChannelHandler does not seem to check the unwritable flag when
writing messages, so the NodeManager eventually runs out of memory.

Histogram:

!histo.jpg!

dominator_tree:

 

  was:
The NodeManager heap size is 4G. In the MAT histogram,
io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
the MAT dominator tree shows those entries are held by ChannelOutboundBuffer.
Analyzing one ChannelOutboundBuffer object, I found 248867 entries in it
(ChannelOutboundBuffer#flushed=248867), with
ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
high water mark (64K), and unwritable=1, meaning the send buffer was full.
But the ChannelHandler does not seem to check the unwritable flag when
writing messages, so the NodeManager eventually runs out of memory.

 


> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.2.3
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the MAT histogram,
> io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
> the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
> Histogram:
> !histo.jpg!
> dominator_tree:
>  






[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Attachment: histo.jpg

> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.2.3
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the MAT histogram,
> io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
> the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
>  






[jira] [Updated] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

2019-08-15 Thread Jiandan Yang (JIRA)


 [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiandan Yang  updated SPARK-28743:
--
Attachment: dominator.jpg

> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has
> too many entries
> ---
>
> Key: SPARK-28743
> URL: https://issues.apache.org/jira/browse/SPARK-28743
> Project: Spark
>  Issue Type: Bug
>  Components: Shuffle
>Affects Versions: 2.2.3
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. In the MAT histogram,
> io.netty.channel.ChannelOutboundBuffer$Entry objects occupy about 2.8G, and
> the MAT dominator tree shows those entries are held by
> ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found
> 248867 entries in it (ChannelOutboundBuffer#flushed=248867), with
> ChannelOutboundBuffer#totalPendingSize=23891232 (about 23 MB), far above the
> high water mark (64K), and unwritable=1, meaning the send buffer was full.
> But the ChannelHandler does not seem to check the unwritable flag when
> writing messages, so the NodeManager eventually runs out of memory.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org