mentioned, many organizations and individuals have been using this,
so wouldn't it be better to have it developed within the Spark community?
Best
Tien Dat
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
2. Is it possible to configure a job with a resource range, e.g. at least
20 and at most 30 CPU cores, and between 20 GB and 40 GB of memory?
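For reference, the closest settings I have found so far are hard caps rather than ranges. A sketch, assuming the standard Spark-on-Mesos properties (the ZooKeeper address, class name, and jar are placeholders):

```shell
# Hedged sketch: Spark on Mesos exposes upper bounds, not min/max ranges.
# spark.cores.max caps the total cores the job may acquire across the cluster;
# spark.executor.memory fixes (rather than bounds) the memory per executor.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.cores.max=30 \
  --conf spark.executor.memory=20g \
  --class com.example.MyJob \
  myjob.jar
```

As far as I can tell there is no property expressing a lower bound like "at least 20 cores"; the scheduler simply accepts whatever offers arrive, up to the cap.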
Thank you in advance.
Best
Tien Dat
Dear Timothy,
It works like a charm now.
BTW (don't judge me if I am too greedy :-)), the latency to start a Spark job
is around 2-4 seconds, unless I am missing some optimization in Spark. Do you
know if the Spark community is working on reducing this latency?
Best
Thank you for your answer.
The thing is, I actually pointed to a local binary file, and Mesos copied
the binary file to a specific folder in /var/lib/mesos/... and
extracted it every time it launched a Spark executor. With the fetch
cache, the copy time is reduced, but the reduction is no
Dear all,
We are running Spark with Mesos as the master for resource management.
In our cluster, there are jobs that require very short response times (near
real-time applications), usually around 3-5 seconds.
In order for Spark to execute with Mesos, one has to specify the
SPARK_EXECUTOR_URI
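A sketch of how we set it, assuming a Spark distribution tarball hosted on HDFS (the namenode address and Spark version in the path are illustrative):

```shell
# Hedged sketch: pointing Mesos agents at a Spark distribution archive.
# Each agent fetches and extracts this archive when launching an executor.
# Set the environment variable in conf/spark-env.sh ...
export SPARK_EXECUTOR_URI=hdfs://namenode:9000/spark/spark-2.2.0-bin-hadoop2.7.tgz

# ... or, equivalently, pass the corresponding Spark property:
# --conf spark.executor.uri=hdfs://namenode:9000/spark/spark-2.2.0-bin-hadoop2.7.tgz
```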