Hi SM,
Apologies for the delayed response.
No, the issue is with Spark 1.2.0; there is a bug in that release.
Spark recently released 1.3.0, so the bug may be fixed there.
I am not planning to test it soon, maybe after some time.
You can give it a try.
Regards,
Shailesh
Thanks. But after setting "spark.shuffle.blockTransferService" to "nio",
the application fails with an Akka client disassociation:
15/01/27 13:38:11 ERROR TaskSchedulerImpl: Lost executor 3 on
wynchcs218.wyn.cnw.co.nz: remote Akka client disassociated
15/01/27 13:38:11 INFO TaskSetManager: Re-queueing tas
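For anyone following along: the property discussed above is typically set either in conf/spark-defaults.conf or via --conf on the spark-submit command line. A minimal sketch, assuming a standard Spark 1.2 deployment:

```
# conf/spark-defaults.conf — fall back to the NIO block transfer service
spark.shuffle.blockTransferService  nio
```

Equivalently, pass `--conf spark.shuffle.blockTransferService=nio` to spark-submit for a single job.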
This was a regression caused by the Netty block transfer service. The fix for
this just barely missed the 1.2 release, and you can see the associated
JIRA here: https://issues.apache.org/jira/browse/SPARK-4837
Current master has the fix, and the Spark 1.2.1 release will have it
included. If you don't
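Once 1.2.1 is out, picking up the fix should only require a dependency bump. A sketch for an sbt build (the version string is the assumed 1.2.1 release):

```
// build.sbt — move to the 1.2.1 release that includes the SPARK-4837 fix
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1"
```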
Can anyone please let me know?
I don't want to open all ports on the network, so I am interested in the
property by which this new port can be configured.
Shailesh
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-2-How-to-change-Default-Random-port-tp21306p2
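Regarding the question above: Spark 1.x exposes properties for pinning the otherwise-random service ports, so only a handful of ports need to be opened on the firewall. A sketch of a conf/spark-defaults.conf fragment (the port numbers are examples; choose ones open on your network):

```
# conf/spark-defaults.conf — pin Spark 1.x services to fixed ports
spark.driver.port           7001
spark.fileserver.port       7002
spark.broadcast.port        7003
spark.replClassServer.port  7004
spark.blockManager.port     7005
spark.executor.port         7006
```

Each property accepts a single port; Spark binds the corresponding service to it instead of picking a random ephemeral port.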
Hello,
Recently I upgraded my setup from Spark 1.1 to Spark 1.2.
I have a 4-node Ubuntu Spark cluster.
With Spark 1.1, I used to write Spark Scala programs in Eclipse on my Windows
development host and submit the jobs to the Ubuntu cluster from Eclipse (the
Windows machine).
As on my network not all