Hi All,
We are running Spark 2.1.1 on Hadoop YARN 2.6.5.
We found that the pyspark.daemon process consumes more than 300 GB of memory.
However, according to
https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals, the
daemon process shouldn't have this problem.
Also, we find the daemon
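(For anyone debugging a similar symptom: per the wiki page above, pyspark.daemon itself only forks Python worker processes, so large resident memory usually lives in the workers rather than the daemon. A hedged example of capping the workers' aggregation buffers, assuming spilling to disk is acceptable for the job; the application file name is a placeholder:)

```shell
# spark.python.worker.memory bounds each Python worker's aggregation
# buffer before it spills to disk (it is not a hard process limit).
# spark.python.worker.reuse keeps workers alive across tasks.
spark-submit \
  --conf spark.python.worker.memory=512m \
  --conf spark.python.worker.reuse=true \
  your_app.py
```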
> Spark's messages use this interface.
> See org.apache.spark.network.protocol.MessageWithHeader.
>
> On Tue, Jun 13, 2017 at 4:17 AM, Niu Zhaojie <nzjem...@gmail.com> wrote:
>
>> Hi All:
>>
>> I am trying to control the network read/write speed with
>> ChannelTrafficShapingHandler provided by Netty.
Hi All:
I am trying to control the network read/write speed with
ChannelTrafficShapingHandler provided by Netty.
In TransportContext.java, I modify it as follows:
public TransportChannelHandler initializePipeline(
    SocketChannel channel,
    RpcHandler channelRpcHandler) {
  try {
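For reference, a minimal sketch of how a per-channel shaper can be attached to a Netty pipeline. This is not the actual TransportContext change; the class name, handler name, and limit values below are illustrative assumptions:

```java
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.traffic.ChannelTrafficShapingHandler;

public class TrafficShapingExample {
  // Illustrative limits (assumptions): throttle writes and reads to
  // 1 MiB/s each, re-evaluating the traffic counters every second.
  private static final long WRITE_LIMIT_BYTES_PER_SEC = 1024 * 1024;
  private static final long READ_LIMIT_BYTES_PER_SEC = 1024 * 1024;
  private static final long CHECK_INTERVAL_MS = 1000;

  /** Installs a traffic shaper at the head of this channel's pipeline. */
  public static void addTrafficShaping(SocketChannel channel) {
    channel.pipeline().addFirst("trafficShaper",
        new ChannelTrafficShapingHandler(
            WRITE_LIMIT_BYTES_PER_SEC,
            READ_LIMIT_BYTES_PER_SEC,
            CHECK_INTERVAL_MS));
  }
}
```

Note that ChannelTrafficShapingHandler throttles one channel at a time; to cap aggregate bandwidth across all connections, Netty's GlobalTrafficShapingHandler (a single instance shared across pipelines) is the usual choice.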