On Sun, Dec 20, 2015 at 12:00 AM, Dave Lester wrote:
> Hello Mesos Community!
>
> About a week ago I had an email exchange with the Mesos PMC regarding the
> future of MesosCon, the community-driven conference for the Apache Mesos
> community. This is an abbreviated version
> E: o...@magnetic.io
> T: +31653362783
> Skype: olafmol
> www.magnetic.io
> www.vamp.io
>
> On 07 Dec 2015, at 22:30, Arunabha Ghosh <arunabha...@gmail.com> wrote:
>
> Hi Folks,
> We at Moz have been working for a while on RogerOS, our
> next gen application platform built on top of Mesos.
Welcome to the community, Oliver.
On Tue, Dec 8, 2015 at 5:49 AM, Olivier Sallou
wrote:
> Hi,
> the GenOuest (http://www.genouest.org) academic lab is now using Mesos
> in production in its core facility to manage scientists computing tasks
> (for bioinformatics)
> To
nges and cherry pick anything tasty looking :)
>
> On 8 December 2015 at 04:03, Arunabha Ghosh <arunabha...@gmail.com> wrote:
> > Thanks, Vinod.
> >
> > I'm not worried too much about scaling the core, you guys have done the
> > hard work on that end
back along the way :)
>
> Would you like to be added to the powered by mesos list?
> https://github.com/apache/mesos/blob/master/docs/powered-by-mesos.md
>
> On Mon, Dec 7, 2015 at 1:30 PM, Arunabha Ghosh <arunabha...@gmail.com>
> wrote:
>
>> Hi Folks,
Hi Folks,
We at Moz have been working for a while on RogerOS, our next
gen application platform built on top of Mesos. We've reached a point in
the project where we feel it's ready to share with the world :-)
The blog posts introducing RogerOS can be found at
back upstream, or is it going to be a permanent
> fork?
>
>
> On Monday, December 7, 2015, Arunabha Ghosh <arunabha...@gmail.com> wrote:
>
>> Hi Folks,
>> We at Moz have been working for a while on RogerOS, our
>> next gen application platform built on top of Mesos.
framework compute platform. Let us know how
> things work for you guys as you scale!
>
> On Mon, Dec 7, 2015 at 7:24 PM, Arunabha Ghosh <arunabha...@gmail.com>
> wrote:
>
>> We're definitely open to merging the changes to Bamboo back upstream if
>> the changes we made p
Hi,
I'm trying to use the cgroups memory isolator, but after setting
--isolation to 'cgroups/cpu,cgroups/mem' I'm getting the following error in
the logs
mesos-slave[12416]: Failed to create a containerizer: Could not create
MesosContainerizer: Could not create isolator cgroups/mem:
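(In my experience this "Could not create isolator cgroups/mem" failure usually means the memory cgroup subsystem is not enabled or not mounted on the host, so the agent cannot create its hierarchy. A hedged diagnostic sketch — paths and kernel parameters are the typical Linux defaults, not taken from your logs, so verify against your distro:)

```shell
# Check whether the kernel knows about the memory cgroup subsystem and
# whether it is enabled (last column 0 = disabled; enabling it often
# requires cgroup_enable=memory swapaccount=1 on the kernel command line).
grep memory /proc/cgroups

# Check whether the memory hierarchy is actually mounted.
mount | grep cgroup | grep memory

# If it is not mounted, mount it as root (mount point is illustrative):
#   mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory

# Then restart the slave with both isolators enabled:
mesos-slave --master=<master-host>:5050 \
            --isolation='cgroups/cpu,cgroups/mem'
```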
Omega was a replacement for the core scheduler, Borg is the clusterOS. Kind
of like a kernel (Omega) and a full fledged os (Borg).
On Thu, Apr 16, 2015 at 9:43 PM, Marco Massenzio ma...@mesosphere.io
wrote:
At Google there are always two ways to do everything: the deprecated one and the
one that's not
+1 to hangouts, but I think Hangouts has a limit of 10 people in the
hangout.
On Mon, Jan 5, 2015 at 4:52 PM, Tom Arnfeld t...@duedil.com wrote:
+1 also! Very interesting to hear what’s being discussed. +1 on the google
hangouts if these meetings are happening in person so we can listen
you're expecting.
On Mon, Dec 15, 2014 at 4:33 PM, Tim Chen t...@mesosphere.io wrote:
Is there anything in the ERROR/WARNING logs?
Tim
On Mon, Dec 15, 2014 at 4:22 PM, Arunabha Ghosh arunabha...@gmail.com
wrote:
Hi,
I've set up a test Mesos cluster on a few VMs running locally. I
Hi,
I would like to run Mesos slaves on machines that have multiple disks.
According to the Mesos configuration page
http://mesos.apache.org/documentation/latest/configuration/ I can specify
a work_dir argument to the slaves.
1) Can the work_dir argument contain multiple directories?
2) Is
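(For question 1: as far as I know, --work_dir accepts a single directory path, not a list, so with multiple disks you would pick one for the slave's metadata and sandboxes. A hedged sketch — the master address and mount path below are illustrative, not from your setup:)

```shell
# work_dir takes one directory; point it at the disk you want the
# slave to use for its metadata and executor sandboxes.
mesos-slave --master=<master-host>:5050 \
            --work_dir=/mnt/disk1/mesos
```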
Thanks Steven!
On Tue, Oct 7, 2014 at 4:08 PM, Steven Schlansker sschlans...@opentable.com
wrote:
On Oct 7, 2014, at 4:06 PM, Arunabha Ghosh arunabha...@gmail.com wrote:
Hi,
I would like to run Mesos slaves on machines that have multiple
disks. According to the Mesos configuration