duhanmin commented on a change in pull request #23278: [SPARK-24920][Core]
Allow sharing Netty's memory pool allocators
URL: https://github.com/apache/spark/pull/23278#discussion_r242102517
##########
File path:
common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
##########
@@ -265,6 +265,16 @@ public boolean saslServerAlwaysEncrypt() {
return conf.getBoolean("spark.network.sasl.serverAlwaysEncrypt", false);
}
+  /**
+   * Flag indicating whether to share the pooled ByteBuf allocators between the
+   * different Netty channels. If enabled, only two pooled ByteBuf allocators are
+   * created: one where caching is allowed (for transport servers) and one where
+   * it is not (for transport clients). When disabled, a new allocator is created
+   * for each transport server and each transport client.
+   */
+  public Boolean sharedByteBufAllocators() {
+    return conf.getBoolean("spark.network.sharedByteBufAllocators.enabled", true);
+  }
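The sharing policy that the Javadoc above describes can be sketched in plain Java. This is a hypothetical model, not the PR's implementation: the class name `AllocatorPool` is invented, and a bare `Object` stands in for Netty's `PooledByteBufAllocator` so the sketch stays self-contained.

```java
import java.util.function.Function;

// Hypothetical sketch of the allocator-sharing policy from the Javadoc:
// when sharing is enabled, at most two pool objects exist process-wide
// (caching allowed for servers, disallowed for clients); when disabled,
// every caller gets a fresh allocator.
final class AllocatorPool {
  private final boolean shared;
  // Stands in for the real Netty allocator factory; takes the allowCache flag.
  private final Function<Boolean, Object> factory;
  private Object cachedForServers;
  private Object uncachedForClients;

  AllocatorPool(boolean shared, Function<Boolean, Object> factory) {
    this.shared = shared;
    this.factory = factory;
  }

  synchronized Object get(boolean allowCache) {
    if (!shared) {
      return factory.apply(allowCache);  // one allocator per transport
    }
    if (allowCache) {
      if (cachedForServers == null) cachedForServers = factory.apply(true);
      return cachedForServers;           // shared by all transport servers
    }
    if (uncachedForClients == null) uncachedForClients = factory.apply(false);
    return uncachedForClients;           // shared by all transport clients
  }
}
```

With sharing enabled, repeated calls with the same `allowCache` value return the same instance; with it disabled, every call creates a new one.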
Review comment:
@attilapiros
Sorry, my English is not very good.
We are using Structured Streaming, mainly for JSON-parsing computations whose
results are written to MySQL. The problem shows up after the job has been
running for a long time. We process 5 topics, each task has 3 partitions, and
about 4 records arrive per second, each roughly 5 MB in size. The startup
script:
spark-submit \
--class ....... \
--driver-memory 3g \
--executor-memory 2g \
--executor-cores 6 \
--num-executors 3 \
.......
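As a sketch of how the flag added by this PR could be exercised from the same submit script, a `--conf` line could be added (the main class and application jar below are placeholders, not taken from the report, and the truncated parts of the original command are not filled in):

```shell
# Placeholder class/jar names; the --conf line disables the new allocator
# sharing (default true) to compare memory behaviour between runs.
spark-submit \
  --class com.example.StreamingJob \
  --driver-memory 3g \
  --executor-memory 2g \
  --executor-cores 6 \
  --num-executors 3 \
  --conf spark.network.sharedByteBufAllocators.enabled=false \
  streaming-job.jar
```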
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]