On 09/25/2015 04:34 PM, Raúl Gutiérrez Segalés wrote:
On Sep 25, 2015 9:08 AM, "Marco Massenzio" wrote:
>
> +1 to what Alex says.
>
> As far as we know, the functionality we use (ephemeral sequential
nodes and writing simple data to a
>
> [0] https://mesosphere.com/downloads/
>
> Marco Massenzio
> Distributed Systems Engineer
> http://codetrips.com
>
> On Tue, Sep 22, 2015 at 10:54 AM, CCAAT
<cc...@tampabay.rr.com> wrote:
On 09/25/2015 08:13 AM, Marco Massenzio wrote:
Folks:
as a reminder, please be aware that as of Mesos 0.24.0, as announced
back in June, Mesos Master will write its information (`MasterInfo`) to
ZooKeeper in JSON format (see below for details).
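Since the announcement doesn't show the payload itself, here is a minimal sketch of how a client might decode a JSON-formatted `MasterInfo` znode. The field names (`id`, `address.hostname`, `address.port`) and the sample payload are assumptions for illustration, not taken from the Mesos source:

```python
import json

def parse_master_info(raw: bytes) -> dict:
    """Decode a JSON-formatted MasterInfo znode payload into leader coordinates."""
    info = json.loads(raw.decode("utf-8"))
    addr = info.get("address", {})
    # Prefer the hostname; fall back to the raw IP if it is absent.
    return {
        "id": info.get("id"),
        "host": addr.get("hostname") or addr.get("ip"),
        "port": addr.get("port"),
    }

# Hypothetical payload, shaped like the JSON format described above:
sample = b'{"id":"master-1","address":{"hostname":"master.example.com","ip":"10.0.0.5","port":5050}}'
print(parse_master_info(sample))
```

The point of the JSON switch is exactly this: clients in any language can read the leader's coordinates with a stock JSON parser instead of deserializing a protobuf.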
What versions of Zookeeper are supported by
On 09/21/2015 03:01 PM, Vinod Kone wrote:
+Jake Farrell
The mesos project doesn't publish platform dependent artifacts. We
currently only publish platform independent artifacts like JAR (to
apache maven) and interface EGG (to PyPI).
Recently we made the decision
Hello,
So I'm working on putting together the mesos-0.24 ebuild for gentoo,
from sources. The tarball,
/usr/portage/distfiles/mesos-0.24.1.tar.gz, is the file pulled down for
mesos, so I guess it is actually mesos-0.24.1. I have it compiling and it
installs these now in /usr/bin/::
t Gentoo
via the arduous 'gentoo handbook'. I'd strongly suggest you endure
that pain, to become functionally literate with Gentoo. Several folks
are working on rapid install semantics for Gentoo on a myriad of
hardware architectures.
wwr,
James
Cheers!
On 20/09/2015 4:35 AM, CCAAT wrote:
On 09/17/2015 11:09 PM, F21 wrote:
That sounds really interesting! I am just in the process of spinning up
a gentoo vm.
Would you mind sharing your ebuild for mesos-0.22.0 via a gist on Github?
On 18/09/2015 12:58 PM, CCAAT wrote:
On 09/17/2015 06:33 PM, F21 wrote:
Is there any way to build portable
On 09/18/2015 01:33 PM, Vinod Kone wrote:
On Fri, Sep 18, 2015 at 11:31 AM, craig w wrote:
Gotcha will there be a blog post / release announcement on the
website soon?
yea i'll get to it. sorry for the delay.
I'm confused. Here at
Oh,
Here is a link that explains the Variable meanings for the packages
downloaded by gentoo's package manager, portage::
https://devmanual.gentoo.org/ebuild-writing/variables/
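For reference, those variables behave roughly like this in a skeleton ebuild. Everything below is a hypothetical sketch (the URI, keywords, and phases are assumptions), not the actual ebuild I'm shipping:

```shell
# mesos-0.24.1.ebuild (hypothetical sketch)
EAPI=5

DESCRIPTION="Apache Mesos cluster resource manager"
HOMEPAGE="http://mesos.apache.org"
# Per the devmanual link: ${P} expands to "mesos-0.24.1" and ${PV} to "0.24.1",
# so portage fetches ${P}.tar.gz into /usr/portage/distfiles/.
SRC_URI="mirror://apache/mesos/${PV}/${P}.tar.gz"

LICENSE="Apache-2.0"
SLOT="0"
KEYWORDS="~amd64"

src_configure() {
    econf
}
```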
I really am at the stage that I want/need to test many tarball releases
and also to start testing on other
on
a rapid install semantic for gentoo
Tomorrow.
James
@ Vinod:: An excellent idea as the code bases mature. It will force
clear delineation of functionality and allow those 'other language'
experts to define their codes for Mesos more clearly.
@ Artem:: Another excellent point. The mesos "core team" will have to
still work with the other
THANKS, as I have not kept up on the spark lists
James
On 08/25/2015 04:28 AM, Iulian Dragoș wrote:
On Mon, Aug 24, 2015 at 7:16 PM, CCAAT
<cc...@tampabay.rr.com> wrote:
On 08/24/2015 05:33 AM, Iulian Dragoș wrote:
Hello Iulian,
Ok, so I
Hello,
Looking here:: [1] It seems we have a very aggressive (tentative)
release schedule for mesos-1.0 ?
Anyone care to approximate (WAG wild_ax_guess) a date for mesos-1.0?
Or will there be other versions after 0.25.0 of mesos?
mesos-0.24 just shows one bug (51/52) as unresolved.
if the
detail is lacking. Still, your link is better
Thanks,
James
On Wed, Aug 5, 2015 at 9:41 AM, CCAAT
<cc...@tampabay.rr.com> wrote:
Hello,
Looking here:: [1] It seems we have a very aggressive (tentative)
release schedule for mesos-1.0
I'd be most curious to see a working example of this idea, prefixes
and all, for sleeping (long-term sleeping) nodes (slaves and masters).
Anybody, do post what you have/are doing with this task-id reuse and
reservations experimentation. Probably many are interested, for a
variety of reasons.
--attributes=cluster:01z99;os:ubuntu-14-04;jdk:8 or whatever makes sense.
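To make that flag concrete, here's a toy parser showing how such a semicolon-separated key:value string breaks down into attributes. This only illustrates the text format; it is not Mesos's own parser (which also handles ranges and sets):

```python
def parse_attributes(spec: str) -> dict:
    """Split a 'key:value;key:value' attribute string into a dict."""
    attrs = {}
    for pair in spec.split(";"):
        pair = pair.strip()
        if not pair:
            continue
        # Split on the first colon only, so values may themselves contain colons.
        key, _, value = pair.partition(":")
        attrs[key.strip()] = value.strip()
    return attrs

print(parse_attributes("cluster:01z99;os:ubuntu-14-04;jdk:8"))
# -> {'cluster': '01z99', 'os': 'ubuntu-14-04', 'jdk': '8'}
```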
Marco Massenzio
Distributed Systems Engineer
On Tue, Jul 7, 2015 at 8:55 AM, CCAAT
<cc...@tampabay.rr.com> wrote:
Hello team_mesos,
Is there any reason one set of (3) masters cannot talk to and manage
several (many) different slave clusters of (3)? These slave clusters
would be different arch, different mixes of resources and be running
different frameworks, but all share/use the same (3) masters.
Ideas
On 07/03/2015 12:30 PM, Tim Chen wrote:
Hi Pradeep,
Without any more information it's quite impossible to know what's going on.
What's in the slave logs and storm framework logs?
Tim
On Fri, Jul 3, 2015 at 10:06 AM, Pradeep Chhetri
<pradeep.chhetr...@gmail.com>
On 07/02/2015 12:10 PM, Carlos Torres wrote:
From: CCAAT cc...@tampabay.rr.com
Sent: Thursday, July 2, 2015 12:00 PM
To: user@mesos.apache.org
Cc: cc...@tampabay.rr.com
Subject: COMMERCIAL:Re: [Question] Distributed Load Testing with Mesos and
Gatling
On 07/01/2015 01:17 PM, Carlos Torres wrote:
Hi all,
In the past weeks, I've been thinking in leveraging Mesos to schedule
distributed load tests.
An excellent idea.
One problem, at least for me, with this approach is that the load testing tool
needs to coordinate
the distributed
tuning Cephfs and btrfs.
James
I'll point out your comments and more details of our plan in our README.md
Thanks!
On Mon, Jun 29, 2015 at 2:31 AM, CCAAT
<cc...@tampabay.rr.com> wrote:
Hello Zhongyue Luo,
Well this is very interesting.
Are you now using, or do you intend to replace, HDFS with cephfs?
That is, is cephfs the distributed file system upon which
mesos and the frameworks run?
Please clarify exactly what your plans are and the architecture and
platforms you intend to support.
On 06/19/2015 01:45 PM, Dave Martens wrote:
Thanks for all of these comments - I had similar questions.
What is the minimum RAM for a master or a slave? I have heard that the
Mesos slave software adds 1GB of RAM on top of what the slave's workload
processing will require. I have read that 8GB
Very Interesting projects there Steven Borrelli!
So, I've been working on structuring (3) classes of nodes for general
deployment among multiple different cluster/cloud offerings.
(1) The traditional 'slave-node' that is 100% controlled by the
cluster/cloud master. Classical workload service
+1 master/slave, no change needed; same as master/slave,
i.e. keep the nomenclature as it currently is.
This means keep the name 'master' and keep the name 'slave'.
Are you applying fuzzy math or Kalman filters to your summations below?
It looks to me, tallying things up, Master is
On 06/05/2015 10:09 AM, Alex Gaudio wrote:
Hi @Ankur,
Next, we built Relay (https://github.com/sailthru/relay) and the Mesos
extension, Relay.Mesos (https://github.com/sailthru/relay.mesos), to
convert our small scripts into long-running instances we could then put
on Marathon.
On 06/01/2015 04:18 PM, Adam Bordelon wrote:
There has been much discussion about finding a less offensive name than
Slave, and many of these thoughts have been captured in
https://issues.apache.org/jira/browse/MESOS-1478
I find political correctness rather nauseating. Folks should stop
trying
On 06/02/2015 11:30 AM, Alexander Gallego wrote:
1. mesos-worker
2. mesos-worker
Currently, my (limited) understanding of the mesos codebase is that the
slave does not have any autonomy; it is 100% controlled by the Master,
hence the clear nomenclature of Master-Slave. If we are to migrate to
On 05/02/2015 02:17 PM, Tim Chen wrote:
Hi Arunabha,
Which linux distro/version are you using?
A quick search on google finds some settings that might be required to
turn on memsw.limit_in_bytes options for cgroups:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1348688
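For anyone hitting the same wall: the fix discussed in that bug is enabling swap accounting at boot. A sketch, assuming a GRUB-based Ubuntu setup (file paths and exact parameters may vary by distro/kernel version):

```shell
# Add the kernel boot parameters that turn on memory+swap accounting,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# Then rebuild the grub config and reboot:
sudo update-grub
sudo reboot

# After the reboot, the memsw knobs should show up in the memory cgroup:
ls /sys/fs/cgroup/memory/ | grep memsw
```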
Tim
Maybe the
On 04/28/2015 11:54 AM, Dick Davies wrote:
Thanks Ian.
Digging around the cgroup there are 3 processes in there;
* the mesos-executor
* the shell script marathon starts the app with
* the actual command to run the task ( a perl app in this case)
We've been having discussions about various
Hello one and all,
I'm not voting here, my reasons should be ridiculously clear.
I only want to point out that WE, the mesos community, should be
planning to move to gcc-5.x, asap. Why? Excellent question:
[1] https://gcc.gnu.org/wiki/OpenACC
[2] https://gcc.gnu.org/gcc-5/changes.html#offload
think it's going to be years of collaboration, with codes, patches and
profiles being shared, to tame the beast. However, that said, I would
certainly be happy if I'm wrong and look forward to those ideas to
simplify this problem's solution.
James
Tim
On Apr 11, 2015, at 1:05 PM, CCAAT cc
On 04/01/2015 11:20 AM, Christos Kozyrakis wrote:
Service discovery is a topic where it's unlikely that a single solution
will satisfy every need and every constraint. It's also good for the
Mesos community to have multiple successful alternatives, even when they
overlap in some ways.
I will
(+1 :: irrelevant?)
It (mesos-0.22.0) compiles on gentoo with:
x86_64-pc-linux-gnu-4.8.3 *
I'll be putting up the ebuild on bugs.gentoo.org, tonight.
hth,
James
On 03/25/2015 10:23 AM, Till Toenshoff wrote:
+1 binding - make check tested on:
- OSX 10.10.3 + gcc 4.9.2
- OSX 10.10.3 +
fast compile times of large codes.
Thanks,
James
On 03/23/2015 10:22 PM, Adam Bordelon wrote:
I know it's over a year old and hasn't been updated, but bmahler already
created a distcc framework example for Mesos.
https://github.com/mesos/mesos-distcc
On Mon, Mar 23, 2015 at 7:56 PM, CCAAT cc
On 03/23/2015 09:02 PM, Adam Bordelon wrote:
Integration tests are definitely desired/recommended. Some of us devs
just do make [dist]check, but others test integrations with their
favourite frameworks, or push it to their internal testing clusters.
We're open to any additional testing you
Hello,
Best (gu)estimates on when Mesos-0.22.x will be released?
James
On 02/22/2015 06:35 AM, i...@roybos.nl wrote:
Last Friday I put some Ansible scripts on GitHub for provisioning a
multi-AZ cluster on AWS.
You could have a look at it
https://github.com/roybos/aws-mesos-marathon-cluster and maybe it helps you.
It basically creates a VPC within an AWS region and
On 02/04/2015 06:00 PM, Pradeep Kiruvale wrote:
In a data center, if there are thousands of heterogeneous nodes
(x86, arm, gpu, fpgas), can mesos really allocate co-located
resources for any incoming application to finish the task faster?
Thanks & Regards,
Pradeep
Hello Pradeep,
On 01/21/2015 11:10 PM, Shuai Lin wrote:
OK, I'll take a look at the debian package.
thanks,
James
You can always write the init wrapper scripts for marathon. There is an
official debian package, which you can find in mesos's apt repo.
On Thu, Jan 22, 2015 at 4:20 AM, CCAAT cc
Hello all,
I was reading about Marathon: Marathon scheduler processes were started
outside of Mesos using init, upstart, or a similar tool [1]
So my related questions are
Does Marathon work with mesos + Openrc as the init system?
Are there any other frameworks that work with Mesos + Openrc?
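To partially answer my own question: OpenRC service scripts are plain shell, so a Marathon wrapper could look roughly like this. The paths, flags, and ZooKeeper URLs below are all assumptions for illustration, not a tested script:

```shell
#!/sbin/openrc-run
# /etc/init.d/marathon -- hypothetical OpenRC service script for Marathon

command="/usr/bin/marathon"
command_args="--master zk://localhost:2181/mesos --zk zk://localhost:2181/marathon"
command_background="yes"
pidfile="/run/marathon.pid"

depend() {
    need net
    after mesos-master
}
```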
On 01/18/2015 04:25 PM, Ranjib Dey wrote:
you are right, the OS is the same, which is the Linux kernel. But the
Ubuntu/CoreOS/Redhat etc. distinctions are in userspace (i.e. tools other
than the kernel), and hence you can have coreos running ubuntu/redhat
containers.
CoreOS is a gentoo knock-off [1,2,3]
they
assume most
users are migrating from an existing Hadoop deployment, so HDFS is
sort of assumed.
On 20 October 2014 23:18, CCAAT cc...@tampabay.rr.com wrote:
On 10/20/14 11:46, Steven Schlansker wrote:
We are running Mesos entirely without HDFS with no problems. We use
Docker to distribute our
On 10/13/14 00:36, Vinod Kone wrote:
No. It wasn't.
I'm no systemd expert, but I do not think you can implement this
if your linux distro is running systemd. If it can be done, I'd sure like
some information on just how the scheme works. A white paper or
well-defined pseudo code?
On Sun, Oct 12,
H,
Possible solution? Attach a computer with multiple ethernet cards.
One is used to interface to the slave via the single port. On the
attached computer (basically a secure router) you run Network Address
Translation (NAT) [1] and other codes to make the multiple interfaces
available on
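The NAT idea above might look roughly like this on the router box; the interface names and forwarding policy are assumptions, just to make the shape of the setup visible:

```shell
# eth0 faces the slave's single port; eth1 is one of the extra interfaces.
# Enable forwarding, then masquerade traffic leaving through eth0.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```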
On 10/07/14 06:50, Stephan Erb wrote:
Seems like there is a workaround: I can emulate my desired configuration
to prevent swap usage, by disabling swap on the host and starting the
slave without --cgroups_limit_swap. Then everything works as expected,
i.e., a misbehaving task is killed
Hello,
Is there an archive for this list?
Tia,
James
Hello one and all,
From my research, the most significant point to using mesos
is to use containers in lieu of a VM configuration [1].
I'd be curious as to informative points that illuminate this
issue. I guess the main point is that for mesos to be all it can be
we're talking about containers on
On 09/26/14 06:20, Stephan Erb wrote:
Hi everyone,
I am having issues with the cgroups isolation of Mesos. It seems like
tasks are prevented from allocating more memory than their limit.
However, they are never killed.
I am running Aurora and Mesos 0.20.1 using the cgroups isolation on
On 09/25/14 10:33, John Mickey wrote:
The default is posix/cpu,posix/mem
Any ideas why it is still trying to use cgroups?
Perhaps this short posting may help a bit?
http://blog.jorgenschaefer.de/2014/07/why-systemd.html
Short answer, systemd is controlling cgroups now, and it is
a huge,
On 09/24/14 13:23, John Mickey wrote:
Thank you for the responses.
I replaced OpenJDK with Oracle JDK and was able to build successfully.
During make check, I received the following error:
F0924 18:12:05.325278 13960 isolator_tests.cpp:136]
CHECK_SOME(isolator): Failed to create isolator:
Hello Brenden/Vinod,
Is your installation using systemd?
Has anyone documented systemd configurations/issues for the various
linux distro running mesos/spark?
What if a cluster is running on a mixture of systems that use/do_not_use
systemd; are there any issues, related to systemd and
Hello,
I've just created an ebuild for mesos-0.20.0 for gentoo. Gentoo's ebuilds
handle the build and runtime settings for software packages on gentoo.
I need to test the new mesos builds to ensure all of the compile time
dependencies are correct and that each runtime dependency option works
On 09/07/14 23:39, Vinod Kone wrote:
Hi James,
Great to see a Gentoo package for Mesos!
Regarding HDFS requirement, any shared storage (even just a http/ftp
server works) that the Mesos slaves can pull the executor from is enough.
Hello Vinod,
I'm looking for more specific advice on not
On 09/08/14 02:55, Tomas Barton wrote:
Spark has support for HDFS, however you don't have to use it and there's
no need to install whole Hadoop stack. I've tested Mesos and Spark with
FhGFS distributed filesystem and it works just fine.
Yes, from what I have read, since this is a new effort,
Hello Mesos,
I have hacked together an ebuild (gentoo package) to install
mesos-0.20.0. It seems to be working, but I need some generic guidelines to
fully test the mesos package.
I also intend to install it on a small cluster of gentoo machines. Do I
need a distributed file system, such as