We tested Spark 1.2 and 1.3, and the issue is gone. I know that starting
from 1.2, Spark uses Netty instead of NIO.
So you mean that bypasses this issue?

Another question: why did this error message not show up in Spark 0.9 or
older versions?
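For anyone who wants to check which path they are on: a sketch of how one might force the legacy NIO ConnectionManager back on to compare behavior, assuming a standard spark-submit setup (the master URL and examples jar path here are placeholders):

```shell
# Sketch: run the same SparkPi sample with the legacy NIO path explicitly
# enabled. spark.shuffle.blockTransferService defaults to "netty" in 1.2+,
# so leaving it unset uses the NettyBlockTransferService replacement.
spark-submit \
  --master spark://xxx:7077 \
  --conf spark.shuffle.blockTransferService=nio \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-*.jar 10
```

If the ConnectionManager errors reappear only with `nio`, that would suggest the messages are specific to the deprecated code path rather than fixed by SPARK-3322.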

On Tue, Aug 4, 2015 at 11:01 PM, Aaron Davidson <ilike...@gmail.com> wrote:

> ConnectionManager has been deprecated and is no longer used by default
> (NettyBlockTransferService is the replacement). Hopefully you would no
> longer see these messages unless you have explicitly flipped it back on.
>
> On Tue, Aug 4, 2015 at 6:14 PM, Jim Green <openkbi...@gmail.com> wrote:
>
>> And also https://issues.apache.org/jira/browse/SPARK-3106
>> This one is still open.
>>
>> On Tue, Aug 4, 2015 at 6:12 PM, Jim Green <openkbi...@gmail.com> wrote:
>>
>>> *Symptom:*
>>> Even sample job fails:
>>> $ MASTER=spark://xxx:7077 run-example org.apache.spark.examples.SparkPi
>>> 10
>>> Pi is roughly 3.140636
>>> ERROR ConnectionManager: Corresponding SendingConnection to
>>> ConnectionManagerId(xxx,xxxx) not found
>>> WARN ConnectionManager: All connections not cleaned up
>>>
>>> Found https://issues.apache.org/jira/browse/SPARK-3322
>>> But the code changes are not in newer versions of Spark, yet this
>>> JIRA is marked as fixed.
>>> Is this issue really fixed in the latest version? If so, what is the
>>> related JIRA?
>>>
>>> --
>>> Thanks,
>>> www.openkb.info
>>> (Open KnowledgeBase for Hadoop/Database/OS/Network/Tool)
>>>
>>
>>
>>
>> --
>> Thanks,
>> www.openkb.info
>> (Open KnowledgeBase for Hadoop/Database/OS/Network/Tool)
>>
>
>


-- 
Thanks,
www.openkb.info
(Open KnowledgeBase for Hadoop/Database/OS/Network/Tool)
