On 24/06/2015 15:35, David Greenberg wrote:
I'm not aware of any minimum offer size option in mesos. What I've
seen success with is holding onto small offers and waiting until I
accumulate enough to launch the large task. This way, the need for
large offers doesn't affect the cluster, but the
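The offer-hoarding approach described above can be sketched as plain scheduler-side bookkeeping. Everything here is hypothetical (the `OfferPool` class and the dict-shaped offers are illustrative, not the Mesos API): hold small offers until their combined resources cover the one large task, then accept them as a batch.

```python
# Sketch of holding small offers until they add up to a large task.
# Offer objects are modelled as plain dicts {"cpus": ..., "mem": ...};
# a real framework would track mesos Offer protobufs instead.

class OfferPool:
    def __init__(self, needed_cpus, needed_mem):
        self.needed_cpus = needed_cpus
        self.needed_mem = needed_mem
        self.held = []  # offers we are sitting on, not yet declined

    def add(self, offer):
        """Hold an offer; return the whole batch once it covers the task."""
        self.held.append(offer)
        cpus = sum(o["cpus"] for o in self.held)
        mem = sum(o["mem"] for o in self.held)
        if cpus >= self.needed_cpus and mem >= self.needed_mem:
            batch, self.held = self.held, []
            return batch  # caller accepts these offers together
        return None       # not enough yet: keep holding, do not decline
```

Note the trade-off: while offers are held they are unavailable to other frameworks, which is exactly the tension with cluster utilisation mentioned above.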
The following problem is mentioned in the Mesos technical paper at
http://mesos.berkeley.edu/mesos_tech_report.pdf
when a cluster is filled by tasks with small resource requirements, a
framework f with large resource requirements may starve, because
whenever a small task finishes, f cannot
On 24/06/2015 16:31, Alex Gaudio wrote:
Does anyone have other ideas?
HTCondor deals with this by having a defrag daemon, which periodically
stops hosts from accepting small jobs, so that it can coalesce small
slots into larger ones.
On 19/06/2015 01:59, Benjamin Mahler wrote:
100ms is the default period for quota:
https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt
Ah, that's very interesting: thank you.
Now if I understand this correctly, assuming Mesos runs all its tasks in
cgroups with CPU bandwidth
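To make the 100 ms figure concrete: CFS bandwidth control is driven by two knobs, `cpu.cfs_period_us` (default 100000 µs, the period cited above from sched-bwc.txt) and `cpu.cfs_quota_us` (runtime allowed per period). A fractional CPU allocation maps onto them by simple multiplication; the helper below is a sketch of that arithmetic, not anything Mesos itself exposes.

```python
# How a fractional CPU share maps onto the kernel's CFS bandwidth knobs.
CFS_PERIOD_US = 100_000  # kernel default period, per sched-bwc.txt

def cfs_quota_us(cpus: float) -> int:
    """Runtime (microseconds per period) for a task limited to `cpus` CPUs."""
    return int(cpus * CFS_PERIOD_US)

# e.g. 0.5 CPUs -> 50000 us of runtime in every 100000 us period;
#      2.0 CPUs -> 200000 us, i.e. two full cores' worth per period.
```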
On 19/06/2015 18:38, Oliver Nicholas wrote:
Unless you have some true HA requirements, it seems intuitively
wasteful to have 3 masters and 2 slaves (unless the cost of 5 nodes is
inconsequential to you and you hate the environment).
Any particular reason not to have three nodes which are acting
On 18/06/2015 06:41, zhou weitao wrote:
A partial solution might be if each task could request a fraction
of a CPU.
But CPU fractions are time-division. I don't think it's possible to
request a fraction.
Because it's timeslicing, there should be no problem at all requesting
fractions.
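The point about timeslicing can be shown with trivial accounting: because shares are enforced by time-division rather than by whole cores, an agent only needs the requested fractions to sum to no more than its core count. The function below is purely illustrative.

```python
# Why fractional CPU requests are unproblematic: timeslicing means an
# agent with N cores can host any mix of tasks whose fractions sum to <= N.

def fits(agent_cpus: float, requests: list) -> bool:
    """True if all fractional requests can be packed onto this agent."""
    return sum(requests) <= agent_cpus

# Eight 0.5-CPU tasks timeshare four cores; a ninth would not fit.
```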
Looking for Mesos .deb packages, on Google I find links to
http://mesosphere.io/downloads/
http://elastic.mesosphere.io/
but these are giving 503 Service Unavailable errors.
Is there a problem, or have these sites gone / migrated away?
to free me from having to
pre-calculate resource requirements for tasks, but if it were able to
respond to *actual* RAM usage on the system, allow for some
overcommitment, and terminate lower-priority tasks if RAM is exhausted,
that would be extremely helpful.
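The behaviour wished for above can be sketched as a small reclaim policy: tolerate overcommitment while there is headroom, and terminate the lowest-priority task only when free RAM is actually exhausted. The task tuples, the reserve threshold, and the kill hook are all hypothetical; on Linux the available-memory figure could be read from /proc/meminfo.

```python
# Sketch of priority-based reclaim under real (not declared) memory pressure.

def reclaim_if_exhausted(tasks, available_mb, reserve_mb=512):
    """If free RAM drops below the reserve, pick a victim to terminate.

    tasks: list of (priority, name) pairs; lower priority is killed first.
    Returns the victim's name, or None while there is still headroom.
    """
    if available_mb >= reserve_mb:
        return None  # overcommitted, perhaps, but not exhausted: do nothing
    victim = min(tasks, key=lambda t: t[0])  # lowest-priority task
    return victim[1]
```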
Thanks,
Brian Candler.
I see in the upcoming 0.23.0 there is support for oversubscription of
resources: the objective is to fill your cluster with background jobs
while still ensuring critical jobs have the resources they need. This is
great.
In order to implement this properly, it seems to me that the executor is
Are there any open-source job queue/batch systems which run under Mesos?
I am thinking of things like HTCondor, Torque etc.
The requirement is to be able to:
- define an overall job as a set of sub-tasks (could be many thousands)
- put sub-tasks into a queue; execute tasks from the queue
-
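The queueing requirement above can be sketched in a few lines: an overall job defined as many sub-tasks, pulled from a shared queue by a fixed pool of workers. This is plain Python standing in for what a Mesos framework's scheduler and executors would do between them; `run_job` and its callable sub-tasks are illustrative names.

```python
# Minimal job-queue sketch: thousands of sub-tasks drained by N workers.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def run_job(subtasks, workers=4):
    """Execute every sub-task (a zero-argument callable); return all results."""
    q =Ueue = Queue()
    for t in subtasks:
        q.put(t)
    results = []  # list.append is atomic in CPython, safe across workers

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except Empty:
                return          # queue drained: this worker is done
            results.append(task())

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(worker)  # pool waits for all workers on exit
    return results
```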
On 07/10/2015 09:44, Nikolaos Ballas neXus wrote:
Maybe you need to read a bit :)
I have read plenty, including those you list, and I didn't find anything
which met my requirements. Again I apologise if I was not clear in my
question.
Spark has a very specific data model (RDDs) and
On 07/10/2015 09:01, Nikolaos Ballas neXus wrote:
Check for Marathon
I don't see how Marathon does what I want. Maybe I wasn't clear enough
in explaining my requirements.
What I need is basically a supercomputer cluster where I can take a
large computation job, break it into lots of
On 07/10/2015 11:08, Pablo Cingolani wrote:
It looks like you are looking for something like BDS
http://pcingola.github.io/BigDataScript/
It has the additional advantage that you can port your scripts seamlessly
between Mesos and other cluster systems (SGE, PBS, Torque, etc.).
Yes, that looks
On 08/10/2015 19:04, Kapil Arya wrote:
I don't know all the details, but I guess, depending upon the exact
interface, it might be possible to have a C++ wrapper around a non-C++
containerizer module. In a very naive approach, one simply fork/execs
the "external" non-C++ containerizer
However, some pages are OK, e.g.
http://mesos.apache.org/documentation/latest/mesos-architecture/
Could someone investigate please?
Thanks,
Brian Candler.