Actually, if you only have one machine, just use Spark's local mode.

Just download the Spark tarball, untar it, and set the master to local[N],
where N is the number of cores. You are good to go; there is no job tracker
or Hadoop to set up.
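
For example, a minimal sketch of those steps (the 1.3.1 / Hadoop 2.4 artifact
is just the current release as of this writing; substitute whichever version
you actually use):

  wget http://archive.apache.org/dist/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.4.tgz
  tar -xzf spark-1.3.1-bin-hadoop2.4.tgz
  cd spark-1.3.1-bin-hadoop2.4

  # interactive shell using all 16 cores in a single JVM
  ./bin/spark-shell --master local[16]

  # or submit a packaged app the same way (the class and jar names here are
  # placeholders)
  ./bin/spark-submit --master local[16] --class com.example.MyApp my-app.jar

Since the driver and executor share one process in local mode, you can give
that single JVM as much of the 256 GB as you like via --driver-memory.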


On Mon, Apr 20, 2015 at 3:21 PM, haihar nahak <harihar1...@gmail.com> wrote:

> Thank you :)
>
> On Mon, Apr 20, 2015 at 4:46 PM, Jörn Franke <jornfra...@gmail.com> wrote:
>
>> Hi, if you have just one physical machine, then I would try out Docker
>> instead of a full VM (a full VM would waste memory and CPU).
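
To illustrate the Docker suggestion: containers can be capped to a slice of
the machine with standard docker run flags. A rough sketch for one standalone
worker (the image name and master hostname below are placeholders, not a
published image):

  # a standalone worker pinned to 4 cores with a 64 GB memory cap
  docker run -d -m 64g --cpuset-cpus="0-3" my-spark-image \
    ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://master-host:7077

Repeating this with different --cpuset-cpus ranges (4-7, 8-11, 12-15) carves
the box into four workers without VM overhead.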
>>
>> Best regards
>> On 20 Apr 2015 at 00:11, "hnahak" <harihar1...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I have a big physical machine with 16 CPUs, 256 GB RAM, and a 20 TB hard
>>> disk. I just need to know: what would be the best way to set up a Spark
>>> cluster?
>>>
>>> If I need to process TBs of data, should I:
>>> 1. Use only one machine, containing the driver, executors, job tracker,
>>> and task tracker all together?
>>> 2. Create 4 VMs, each with 4 CPUs and 64 GB RAM?
>>> 3. Create 8 VMs, each with 2 CPUs and 32 GB RAM?
>>>
>>> Please give me your views/suggestions.
>>>
>>>
>>>
>
>
> --
> {{{H2N}}}-----(@:
>
