Hi Matei,
We have an analytics team that uses the cluster on a daily basis. They use
two types of 'run modes':
1) For running actual queries, they set spark.executor.memory to
something between 4 and 8 GB of RAM per worker.
2) A shell that takes a minimal amount of memory on workers (128 MB) for …
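As a sketch of run mode (1), the per-executor memory can be passed when launching a shell or set in the defaults file; the 4g value below is illustrative, not a recommendation:

```shell
# Illustrative: set per-executor memory when launching a Spark shell
spark-shell --conf spark.executor.memory=4g

# Equivalent entry in conf/spark-defaults.conf:
# spark.executor.memory  4g
```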
Hey Gary, just as a workaround, note that you can use Mesos in coarse-grained
mode by setting spark.mesos.coarse=true. Then it will hold onto CPUs for the
duration of the job.
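For reference, a minimal way to enable the workaround Matei describes (the master URL here is a placeholder):

```shell
# Illustrative: run a Spark shell on Mesos in coarse-grained mode,
# so executors hold their CPUs for the lifetime of the application
spark-shell --master mesos://host:5050 \
  --conf spark.mesos.coarse=true
```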
Matei
On August 23, 2014 at 7:57:30 AM, Gary Malouf (malouf.g...@gmail.com) wrote:
Sure, it's really a good idea to have a CONTRIBUTING.md file with details
on how to contribute, e.g. cloning, branching, making changes, and
committing, along with the corresponding git commands. That way, someone
who wants to contribute gets the benefit of quick, concise documentation
on contributing.
Maisnam
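As a sketch of the workflow such a file might document (the branch name and commit message below are illustrative, not Spark's actual conventions):

```shell
# Illustrative contribution workflow
git clone https://github.com/apache/spark.git   # clone the repository
cd spark
git checkout -b my-feature                      # work on a topic branch
# ... edit files ...
git add -A
git commit -m "Describe the change"
git push origin my-feature                      # then open a pull request
```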
I just wanted to bring up a significant Mesos/Spark issue that makes the
combo difficult to use for teams larger than 4-5 people. It's covered in
https://issues.apache.org/jira/browse/MESOS-1688. My understanding is that
Spark's use of executors in fine-grained mode is a very different behavior
t…
That sounds like a good idea.
Continuing along those lines, what do people think of moving the
contributing page entirely from the wiki to GitHub? It feels like the right
place for it, since GitHub is where we take contributions, and it also lets
people submit improvements to the page itself.
Nick
August 23, 2014
Can I ask a related question, since I have a PR open to touch up
README.md as we speak (SPARK-3069)?
If this text is in a file called CONTRIBUTING.md, then it will cause a
link to appear on the pull request screen, inviting people to review
the contribution guidelines:
https://github.com/blog/118
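For reference, the mechanism described above needs only a file named CONTRIBUTING.md at the repository root; the file contents below are a placeholder, not Spark's actual guidelines:

```shell
# Illustrative: create a CONTRIBUTING.md at the repository root; GitHub
# then links to it from the pull request creation page
cat > CONTRIBUTING.md <<'EOF'
## Contributing to Spark
Please review the contribution guidelines before opening a pull request.
EOF
```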