Also note that you're on a problematic Marathon version.
I was thinking of upgrading from 0.23 & 0.10.1 to the latest versions, but decided
to wait after seeing an announcement on the Marathon users list last week
(Oct 9) that 0.11 is not recommended for production.
I'm waiting until 0.11.1 is out...
for docker only scales to one host. Can
someone confirm whether it has worked with multiple slaves?
What is the most common engine everyone uses for load-balancing an app
with multiple tasks/Docker containers?
Shafay Latif
On Aug 3, 2015, at 9:44 AM, Itamar Ostricher ita...@yowza3d.com wrote:
Thanks!
I use Marathon to launch an nginx Docker container named my-app, and set
up Mesos-DNS, such that my-app.marathon.mesos returns the IP of the slave
running the container (e.g. 10.20.30.40).
Now, my-app is running on some dynamically-allocated port (e.g. 31001),
but I would like
Hi,
I just set up mesos-dns with my mesos+marathon cluster, and it appears to
be working fine, but I can't get SRV records.
mesos-dns is executed by running:
$ sudo /usr/local/mesos-dns/mesos-dns -config /usr/local/mesos-dns/config.json
It is verified to be working by running dig from another machine:
/mesos-dns/docs/naming.html#srv-records
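For reference, a minimal config.json sketch for Mesos-DNS might look like the following (all hostnames and values here are assumptions for a hypothetical cluster, not the poster's actual config). SRV records are served under the name pattern _&lt;task&gt;._&lt;protocol&gt;.&lt;framework&gt;.&lt;domain&gt;, e.g. _my-app._tcp.marathon.mesos:

```json
{
  "zk": "zk://master1:2181/mesos",
  "masters": ["master1:5050"],
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8"],
  "refreshSeconds": 60,
  "ttl": 60
}
```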
Andras
*From:* Itamar Ostricher [mailto:ita...@yowza3d.com]
*Sent:* Tuesday, July 28, 2015 12:15 PM
*To:* user@mesos.apache.org
*Subject:* Can't get SRV records from Mesos-DNS
Hi,
I just set up mesos-dns with my mesos+marathon cluster
Hi,
We have a production pipeline running a series of jobs, with each job
creating a custom mesos framework to execute all tasks related to that job.
Both scheduler and executor are written using the Python mesos API.
Here's a snippet (modified for brevity) of the scheduler code:
class
reviveOffers. Would reviveOffers change the situation?
On Tue, Apr 7, 2015 at 1:50 AM Itamar Ostricher ita...@yowza3d.com
wrote:
Got it. Thanks!
On Mon, Apr 6, 2015 at 9:02 PM, Vinod Kone vinodk...@apache.org wrote:
To clarify David's answer, you should only get the (16, 8) offer until the
filter
be consolidated.
On Mon, Apr 6, 2015 at 7:29 AM Itamar Ostricher ita...@yowza3d.com
wrote:
Say my scheduler received a resource offer from slave S with 16 CPUs and
16 GiB mem, and called launchTasks on this offer with a utilization of
16 CPUs and 8 GiB mem.
From what I see (with mesos 0.21), the left over
offer. You don't need to restart the master for the aggregation.
On Mon, Apr 6, 2015 at 10:46 AM, Itamar Ostricher ita...@yowza3d.com
wrote:
Thanks David!
I'd like to make sure I understand you correctly.
Will I get both the (16, 8) and (0, 8) offers, or just the (16, 8) offer? (because
I previously
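The accounting discussed in this thread can be sketched in plain Python (a toy model, not the Mesos API): the leftover portion of a partially used offer is what the master may re-offer, and later consolidate with other unused resources from the same slave.

```python
# Toy model of Mesos offer accounting (not the real API): launching tasks
# against an offer leaves the unused remainder, which the master can
# re-offer and later consolidate.

def leftover(offer, used):
    """Per-resource remainder after launchTasks consumed `used`."""
    return {name: offer[name] - used.get(name, 0) for name in offer}

offer = {"cpus": 16, "mem_gib": 16}  # the (16, 16) offer from slave S
used = {"cpus": 16, "mem_gib": 8}    # what launchTasks consumed
print(leftover(offer, used))         # {'cpus': 0, 'mem_gib': 8}
```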
Thanks Michael!
On Mon, Mar 23, 2015 at 7:59 AM, Michael Park mcyp...@gmail.com wrote:
Hi Itamar,
Thanks for the patch! It looks like Niklas and Jie have looked at the patch,
and I'm sure they'll commit it soon; if not, I'll nudge them :)
Great :-)
2. I would imagine there could be a
haven't found the existing method to be limiting in
performance/latency for our needs at this time.
On Thu, Mar 19, 2015 at 8:19 AM, Itamar Ostricher ita...@yowza3d.com
wrote:
Hi,
According to the Python interface docstring
https://github.com/apache/mesos/blob/master/src/python/interface
Making sure the question is clear:
I'm implementing a framework scheduler,
and I want to know if the resourceOffers method can be invoked while a
previous invocation hasn't returned yet (on another thread).
Thanks,
- Itamar.
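Whatever the driver's delivery guarantees turn out to be, one defensive option is to serialize the handler yourself with a lock. The SafeScheduler below is a hypothetical stand-in for a real Scheduler subclass, exercised by two plain threads rather than an actual Mesos driver:

```python
import threading

# Hypothetical sketch: if you are unsure whether the driver may invoke
# resourceOffers concurrently, a lock inside the scheduler serializes the
# handler defensively without changing its logic.

class SafeScheduler:
    def __init__(self):
        self._lock = threading.Lock()
        self.handled = []

    def resourceOffers(self, driver, offers):
        with self._lock:  # only one invocation mutates state at a time
            self.handled.extend(offers)

# Simulate a driver delivering offers from two threads at once:
sched = SafeScheduler()
threads = [threading.Thread(target=sched.resourceOffers,
                            args=(None, ["offer-%d" % i]))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(sched.handled))  # ['offer-0', 'offer-1']
```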
know. Your
executor can implement it, and that may be one simple way to do it. That
could also be a good way to implement shell's rlimit*, in general.
On Wed, Jan 21, 2015 at 1:22 AM, Itamar Ostricher ita...@yowza3d.com
wrote:
I'm using a custom internal framework, loosely based on MesosSubmit.
The phenomenon I'm seeing is something like this:
1. Task X is assigned to slave S.
2. I know this task should run for ~10minutes.
3. On the master dashboard, I see that task X is in the Running state for
several *hours*.
4. I
to know how you end up doing it!
--
Tom Arnfeld
Developer // DueDil
On Thursday, Jan 8, 2015 at 7:32 am, Itamar Ostricher ita...@yowza3d.com,
wrote:
Thanks everybody for all your insights!
I totally agree with the last response from Tom.
The per-node services definitely belong to the level
).
Tomas
On 6 January 2015 at 10:12, Itamar Ostricher ita...@yowza3d.com wrote:
Are there recommendations regarding master / scheduler machines resources
as function of cluster size?
Say I have a cluster with hundreds of slave machines and thousands of CPUs,
with a single framework that will schedule millions of tasks.
How does the strength of the master scheduler machines
Hi,
I experimented today with running Mesos masters and slaves with multiple
masters using ZooKeeper, by editing the /etc/mesos/zk file on all nodes
(masters and slaves) to something like:
zk://master1:2181,master2:2181,master3:2181/mesos
I noticed that if not all masters are up when a master or slave
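A related detail (an assumption based on the common Mesosphere packaging layout, not something stated in the thread): with three masters, each master also needs a quorum setting of a majority, which is also why too few live masters prevents leader election:

```
# /etc/mesos/zk (every node):
zk://master1:2181,master2:2181,master3:2181/mesos

# /etc/mesos-master/quorum (each master): a majority of the masters, 2 of 3
2
```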
in a dynamic manner depending on the
data
What is the biggest bottleneck you have? disk read/write, network, CPU,
memory?
Writing your own framework is possible if you can take advantage of some
problem-specific property.
On 24 July 2014 07:34, Itamar Ostricher ita...@yowza3d.com wrote:
many: we
Hi,
I'm trying to do a clean build of Mesos from the 0.19.0 tarball.
I was following the instructions from
http://mesos.apache.org/gettingstarted/ step by step. Got to running
`make`, which ran for quite a while, and exited with errors (see the end of
the output below).
Extra env info: I'm trying
, Jul 23, 2014 at 11:55 AM, Tomas Barton barton.to...@gmail.com
wrote:
Hi,
that's quite strange. Try running
ldconfig
and then make again.
You can find binary packages for Debian here:
http://mesosphere.io/downloads/
Tomas
On 23 July 2014 10:09, Itamar Ostricher ita...@yowza3d.com wrote: