1. For what reasons is Spark using the above ports? What internal component is triggering them? - Akka (guessing from the error log) is used to schedule tasks and to notify executors; the ports it binds are chosen at random by default.

2. How can I get rid of these errors? - Most likely those ports are not open between your machines. You can pin specific ports using spark.driver.port and spark.executor.port and open only those, or you can open all ports between the masters and slaves. For a cluster on EC2, the ec2 script takes care of the required rules.
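As a sketch of option one, you could pin the ports in conf/spark-defaults.conf on every node. The port numbers below are arbitrary examples, not recommendations; pick any free ports and open them in your firewall between all masters and workers:

```properties
# Pin the ports that Spark/Akka would otherwise pick at random.
# Example values only - choose any free ports on your cluster.
spark.driver.port        7001
spark.executor.port      7002
spark.blockManager.port  7003
```

With these set, the worker-to-executor connections always use the same ports, so a firewall rule covering just them is enough.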
3. Why does the application still finish with success? - Do you have more workers in the cluster which are able to connect? If so, the tasks can be scheduled on them.

4. Why is it trying more ports? - Not sure; it is picking the ports randomly.

On Thu, Feb 5, 2015 at 2:30 PM, Spico Florin <spicoflo...@gmail.com> wrote:
> Hello!
> I received the following errors in the workerLog.log files:
>
> ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@stream4:33660]
> -> [akka.tcp://sparkExecutor@stream4:47929]: Error [Association failed
> with [akka.tcp://sparkExecutor@stream4:47929]] [
> akka.remote.EndpointAssociationException: Association failed with
> [akka.tcp://sparkExecutor@stream4:47929]
> Caused by:
> akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
> Connection refused: stream4/x.x.x.x:47929
> ]
>
> (For security reasons I have masked the IP with x.x.x.x.) The same errors
> occur for different ports (42395, 39761).
> Even though I have these errors, the application finishes with success.
> I have the following questions:
> 1. For what reasons is Spark using the above ports? What internal
> component is triggering them?
> 2. How can I get rid of these errors?
> 3. Why does the application still finish with success?
> 4. Why is it trying more ports?
>
> I look forward to your answers.
> Regards,
> Florin

--
Arush Kharbanda || Technical Teamlead
ar...@sigmoidanalytics.com || www.sigmoidanalytics.com