Hello Markus,

I had a similar question 
(http://mail-archives.apache.org/mod_mbox/incubator-spark-user/201310.mbox/%3CC6960B654043804182DF9FD46E83E6DF13F8E6CE%40CWYIGMBCRP01.Corp.Acxiom.net%3E
 ) a few days ago. You can exclude Mesos nodes with a small memory footprint 
by setting the executor memory requirement fairly high. But I agree with what 
you're trying to do: being able to handle heterogeneous clusters would be a 
very handy feature to add to Spark, e.g., sizing the work sent to each Mesos 
node according to that node's resources. 
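As a sketch, the workaround above uses the spark.executor.memory property; the 8g value is purely illustrative, and the SPARK_JAVA_OPTS launch mechanism shown here is the one from Spark 0.8-era standalone/Mesos deployments, so the exact incantation depends on your version and cluster manager:

```shell
# Request a large executor heap so that Mesos resource offers from
# low-memory nodes are effectively declined. The property name
# spark.executor.memory is real; the 8g value and the use of
# SPARK_JAVA_OPTS are illustrative and version-dependent.
export SPARK_JAVA_OPTS="-Dspark.executor.memory=8g"
./spark-shell
```

The trade-off is that this is a blunt filter: it keeps small nodes out entirely rather than giving them proportionally smaller tasks.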

Regards,
Charles



-----Original Message-----
From: Markus Losoi [mailto:[email protected]] 
Sent: Monday, October 07, 2013 11:32 PM
To: [email protected]
Subject: Spark in a heterogeneous computing environment

Hi

Is it currently possible to tell Spark that some worker nodes should be 
preferred over others? That is, in a heterogeneous computing environment, 
some computing units are more powerful than others, and assigning computing 
jobs to them should be prioritized.

Best regards,
Markus Losoi ([email protected])
