This is how it works, I think:

sc.parallelize(..) takes the collection passed to it and returns a 
“distributable equivalent” of that collection. That is, an RDD is returned.

This RDD can be worked on by multiple worker threads in _parallel_. The 
parallelize(..) call has to be made in the driver running on the master (not on 
the workers), so that the resulting RDD’s partitions can be sent across to 
workers that have enough resources. Each worker processes one partition of the 
RDD.

So it is precisely because “sc.parallelize(..) is performed in the driver 
program” that you get scalability — horizontal scalability: the driver only 
slices and schedules, while the workers do the actual processing.
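A rough sketch of the driver-side slicing in plain Python (this is not Spark code; the helper name is made up for illustration, but Spark's ParallelCollectionRDD uses a similar contiguous-slice scheme before shipping partitions to executors):

```python
def slice_into_partitions(data, num_partitions):
    """Split `data` into `num_partitions` roughly equal, contiguous
    chunks, the way the driver partitions a parallelized collection."""
    n = len(data)
    return [
        data[(i * n) // num_partitions : ((i + 1) * n) // num_partitions]
        for i in range(num_partitions)
    ]

# The driver computes the slices locally; only then are they distributed.
parts = slice_into_partitions(list(range(10)), 3)
print(parts)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
```

The point is that the slicing itself is cheap and happens once on the driver; the expensive per-element work runs on whichever worker holds each partition.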

From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: 26 June 2015 13:33
To: shahab
Cc: user@spark.apache.org
Subject: Re: Performing sc.paralleize (..) in workers not in the driver program

Why do you want to do that?

Thanks
Best Regards

On Thu, Jun 25, 2015 at 10:16 PM, shahab 
<shahab.mok...@gmail.com<mailto:shahab.mok...@gmail.com>> wrote:
Hi,

Apparently, the sc.parallelize(..) operation is performed in the driver program, 
not in the workers! Is it possible to do this in a worker process, for the sake 
of scalability?

best
/Shahab

