Hmm, interesting. I'm using standalone mode but I could consider YARN. I'll
have to simmer on that one. Thanks as always, Sean!
On Wed, Sep 17, 2014 at 12:40 AM, Sean Owen wrote:
I thought I answered this ... you can easily accomplish this with YARN
by just telling YARN how much memory / CPU each machine has. This can
be configured in groups too rather than per machine. I don't think you
actually want differently-sized executors, and so don't need ratios.
But you can have d
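For reference, the per-node declaration Sean describes lives in each NodeManager's yarn-site.xml, so a differently-sized machine simply ships different values. This is a sketch with illustrative numbers for one hardware class, not a recommendation:

```xml
<!-- yarn-site.xml on a NodeManager with 16 cores / 64 GB RAM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value> <!-- ~56 GB offered to containers; leave OS headroom -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>
```

The application then requests uniformly sized executors (e.g. `spark-submit --master yarn --executor-cores 2 --executor-memory 7g`), and YARN packs as many of them onto each node as that node's declared capacity allows, which is why per-executor ratios aren't needed.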
I'm supposing that there's no good solution to having heterogeneous hardware
in a cluster. What are the prospects of having something like this in the
future? Am I missing an architectural detail that precludes this
possibility?
Thanks,
Victor
On Fri, Sep 12, 2014 at 12:10 PM, Victor Tso-Guillen wrote:
Ping...
On Thu, Sep 11, 2014 at 5:44 PM, Victor Tso-Guillen wrote:
So I have a bunch of hardware with different core and memory setups. Is
there a way to do one of the following:
1. Express a ratio of cores to memory to retain. The spark worker config
would represent all of the cores and all of the memory usable for any
application, and the application would take
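In standalone mode, the closest per-machine analogue (a sketch, and not a full answer to the ratio question above) is to advertise each worker's capacity in its own conf/spark-env.sh, with values chosen per machine:

```shell
# conf/spark-env.sh on one worker; illustrative values for a
# 16-core / 64 GB machine -- other machines ship their own values
SPARK_WORKER_CORES=16     # cores this worker offers to applications
SPARK_WORKER_MEMORY=60g   # memory this worker offers (leave OS headroom)
```

The awkward part, and the apparent motivation for the question, is that a standalone application still requests a single `spark.executor.memory` for all of its executors, so unevenly sized machines end up either underused or oversubscribed.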