Re: Welcome Haosdent Huang as Mesos Committer and PMC member!

2016-12-20 Thread Aaron Carey
Congratulations! Very well deserved! Always so helpful :)

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


On 20 December 2016 at 01:24, Jie Yu <yujie@gmail.com> wrote:

> Congrats! Well deserved!!
>
> Always wondering why you have so much time!
>
> - Jie
>
> On Mon, Dec 19, 2016 at 5:19 PM, Jay Guo <guojiannan1...@gmail.com> wrote:
>
>> Congratulations Haosdent!!!
>>
>> /J
>>
>> On Mon, Dec 19, 2016 at 4:40 PM, Chengwei Yang
>> <chengwei.yang...@gmail.com> wrote:
>> > Congratulations! Well deserved.
>> >
>> > Haosdent helps me a lot!
>> >
>> > On Fri, Dec 16, 2016 at 01:59:19PM -0500, Vinod Kone wrote:
>> >> Hi folks,
>> >>
>> >> Please join me in formally welcoming Haosdent Huang as Mesos Committer
>> and
>> >> PMC member.
>> >>
>> >> Haosdent has been an active contributor to the project for more than a
>> year
>> >> now. He has contributed a number of patches and features to the Mesos
>> code
>> >> base, most notably the unified cgroups isolator and health check
>> >> improvements. The most impressive thing about him is that he always
>> >> volunteers to help out people in the community, be it on slack/IRC or
>> >> mailing lists. The fact that he does all this even though working on
>> Mesos
>> >> is not part of his day job is even more impressive.
>> >>
>> >> Here is his more formal checklist
>> >> <https://docs.google.com/document/d/1wq-M4KoMOJWZTNTN-hvy-H8ZGLXG6CF9VP2IY_UU5_0/edit?ts=57e0029d>
>> >> for your perusal.
>> >>
>> >> Thanks,
>> >> Vinod
>> >>
>> >> P.S: Sorry for the delay in sending the welcome email.
>> >
>> > --
>> > Thanks,
>> > Chengwei
>>
>
>


RE: Support for tasks groups aka pods in Mesos

2016-08-11 Thread Aaron Carey
Hi Vinod,

I'm not sure where this should go in the design doc, but an important feature 
of Kubernetes for us is the ability to share mount and network namespaces 
between containers in a pod (so containers can share mounts etc.). This 
currently isn't well supported in Docker (I believe Kubernetes uses its own 
containeriser to do this?).
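As an aside, this is roughly how namespace sharing between containers can be expressed with plain Docker today; the container names below are illustrative, and Kubernetes does something similar internally with its "pause" container:

```shell
# Start a placeholder container that owns the shared network namespace.
docker run -d --name ns-holder alpine sleep infinity

# Join a second container to that namespace: both containers now share
# one localhost and one set of network interfaces.
docker run -d --name web --net=container:ns-holder nginx

# Mounts can be shared between containers with --volumes-from.
docker run -d --name sidecar --volumes-from web \
    --net=container:ns-holder alpine sleep infinity
```

This covers network and volume sharing, but the full pod semantics (shared lifecycle, shared mount namespace) are exactly what plain Docker lacked at the time.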

We'd be very keen to have this ability in Mesos too.

Thanks,
Aaron



--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Vinod Kone [vinodk...@apache.org]
Sent: 09 August 2016 02:42
To: Vinod Kone
Cc: dev; user
Subject: Re: Support for tasks groups aka pods in Mesos

Sorry, sent the wrong link earlier for design doc.

Design doc: https://issues.apache.org/jira/browse/MESOS-6009

Direct link: 
https://docs.google.com/document/d/1FtcyQkDfGp-bPHTW4pUoqQCgVlPde936bo-IIENO_ho/edit#heading=h.ip4t59nlogfz


RE: Support for tasks groups aka pods in Mesos

2016-08-09 Thread Aaron Carey
Just had a brief look over; this is great, a huge leap forward. We were 
considering moving to Kubernetes because of its pod support, so having this in 
Mesos would allow us to do a lot more!


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Vinod Kone [vinodk...@apache.org]
Sent: 09 August 2016 02:42
To: Vinod Kone
Cc: dev; user
Subject: Re: Support for tasks groups aka pods in Mesos

Sorry, sent the wrong link earlier for design doc.

Design doc: https://issues.apache.org/jira/browse/MESOS-6009

Direct link: 
https://docs.google.com/document/d/1FtcyQkDfGp-bPHTW4pUoqQCgVlPde936bo-IIENO_ho/edit#heading=h.ip4t59nlogfz


RE: Fetcher cache: caching even more while an executor is alive

2016-07-05 Thread Aaron Carey
As you're writing the framework, have you looked at reserving persistent 
volumes? I think it might help in your use case:

http://mesos.apache.org/documentation/latest/persistent-volume/

Aaron

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: 上西康太 [ueni...@nautilus-technologies.com]
Sent: 05 July 2016 08:24
To: user@mesos.apache.org
Subject: Fetcher cache: caching even more while an executor is alive

Hi,
I'm developing my own framework that distributes >100 independent
tasks across the cluster and simply runs them. My problem is that
each task's execution environment is a rather large tarball (2~6 GB,
mostly application jar files) and the task itself finishes within
1~200 seconds, while extracting the tarball takes tens of seconds
every time. Extracting the same tarball again and again for every
task is a wasteful overhead that cannot be ignored.

The fetcher cache is great, but in my case it isn't enough: I want
to preserve all files extracted from the tarball while my executor is
alive. If Mesos could cache the extracted files, omitting not only
the download but also the extraction, I could save even more time.
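For what it's worth, an executor (or a wrapper script around the task) can approximate this today by caching extractions itself. Here is a hedged sketch; the paths and cache layout are made up, and this is not something Mesos provides:

```shell
#!/bin/sh
# Sketch: cache extracted tarballs on the node, keyed by content hash,
# so repeated tasks skip re-extraction. Illustrative only.
set -e

CACHE_ROOT=${CACHE_ROOT:-/tmp/bundle-cache}
mkdir -p "$CACHE_ROOT"

cached_extract() {
    tarball=$1
    # Key the cache on content, not file name, so updated bundles re-extract.
    key=$(sha256sum "$tarball" | cut -c1-16)
    dest="$CACHE_ROOT/$key"
    if [ ! -d "$dest" ]; then
        tmp=$(mktemp -d "$CACHE_ROOT/extract.XXXXXX")
        tar -xzf "$tarball" -C "$tmp"
        # Publish under the content-keyed name (a production version would
        # also guard against concurrent extractions of the same bundle).
        mv "$tmp" "$dest"
    fi
    echo "$dest"
}
```

A task would then run against the directory printed by `cached_extract`, and only the first task per bundle on each node pays the extraction cost.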

Neither the "Fetcher Cache Internals" [1] nor the "Fetcher Cache" [2]
section of the official documentation mentions this issue or any
future work on it. How do you solve this kind of extraction overhead
when you have a rather large resource?

One option would be to set up an internal Docker registry and let
slaves cache a Docker image that includes our jar files, avoiding the
tarball extraction. But I want to keep additional moving parts out of
our system as much as I can.

Another option might be to let the fetcher fetch each jar file
independently on the slaves; that seems feasible, but I don't think
it is easily manageable in production.

PS Mesos is great; it is helping us a lot, and I appreciate all the
effort by the community. Thank you so much!

[1] http://mesos.apache.org/documentation/latest/fetcher-cache-internals/
[2] http://mesos.apache.org/documentation/latest/fetcher/

Kota UENISHI


RE: Rack awareness support for Mesos

2016-06-14 Thread Aaron Carey
#3 would be very helpful for us. Also related:

https://issues.apache.org/jira/browse/MESOS-3059

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Du, Fan [fan...@intel.com]
Sent: 14 June 2016 07:24
To: user@mesos.apache.org; d...@mesos.apache.org
Cc: Joris Van Remoortere; vinodk...@apache.org
Subject: Re: Rack awareness support for Mesos

Hi everyone

Let me summarize the discussion about rack awareness in the community so
far. First, thanks for all the comments, advice, and challenges! :)

#1. Stick with attributes for rack awareness

For compatibility with existing frameworks, I'm OK with using
attributes to convey the rack information, with the goal of doing it
automatically, keeping it easy to maintain, and using a good attribute
schema. This brings up the question below, where the controversy starts.
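For reference, static attributes can already be set today via the agent's --attributes flag; the key names and values below are illustrative:

```shell
# Semicolon-separated key:value pairs, visible to frameworks in offers.
mesos-slave --master=zk://zk1:2181/mesos \
            --attributes="rack:rack-23;datacenter:dc-1" \
            --work_dir=/var/lib/mesos
```

The controversy is not whether attributes can carry rack information, but how the values get populated and kept up to date.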

#2. Scripts vs programmatic way

Both can be used to set attributes. I've made my arguments in the Jira
and the design doc, and I'm not going to argue further here. But please
take a look at the earlier discussion in MESOS-3366, which allows
resources/attributes discovery.

A module implementing the *slaveAttributesDecorator* hook would work
like a charm here in a static way; attribute updating still needs to be
justified.

#3. Allow updating attributes
Several cases need to be covered here:

a) Mesos runs inside VMs or containers, where live migration happens,
so rack information needs to be updated.

b) LLDP packets are broadcast at an interval of 10s~30s (a
vendor-specific implementation), and rack information is usually
stored in the LLDP daemon to be queried. In the worst cases (fresh
node reboot, or daemon restart), a Mesos slave would have to wait
10s~30s for valid rack information before registering with the
master. Allowing attribute updates would mitigate this problem.

c). Framework affinity

Framework X prefers to run on the same nodes as another framework Y.
For example, it's desirable for Shark or Spark-SQL to reside on the
*worker* nodes where Alluxio (formerly Tachyon) runs, to gain a
performance boost, as in the SPARK-6707 ticket message
{tachyon=true;us-east-1=false}.

If frameworks could see agent attributes in the resource-offer
process, awesome!
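To illustrate how a framework scheduler might use such attributes for affinity, here is a hedged Python sketch; the offers are modelled as plain dicts rather than the real protobuf Offer messages, and the attribute and rack names are made up:

```python
def rack_of(offer):
    """Return the 'rack' attribute of an offer, or None if absent."""
    return offer.get("attributes", {}).get("rack")

def offers_colocated_with(offers, wanted_racks):
    """Keep only offers whose agents sit in racks where our partner
    framework (e.g. Alluxio workers) is already running."""
    return [o for o in offers if rack_of(o) in wanted_racks]

# Example: the partner framework is known to run in these racks.
alluxio_racks = {"rack-23", "rack-42"}

offers = [
    {"agent": "a1", "attributes": {"rack": "rack-23"}},
    {"agent": "a2", "attributes": {"rack": "rack-99"}},
    {"agent": "a3", "attributes": {"rack": "rack-42"}},
]

preferred = offers_colocated_with(offers, alluxio_racks)
print([o["agent"] for o in preferred])  # ['a1', 'a3']
```

The same filter shape works for anti-affinity (fault domains) by inverting the membership test.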


#4. Rearrange agents in a more scalable manner, like per rack basis

Randomly offering agent resources to frameworks does not improve data
locality; imagine the likelihood of a framework getting resources
under the same rack at the scale of 3+ nodes. Moreover, the time to
randomly shuffle the agents also grows.

How about rearranging the agents on a per-rack basis? A minor change
to the way resources are allocated would fix this.


I might not see the whole picture here, so comments are welcome!


On 2016/6/6 17:17, Du, Fan wrote:
> Hi, Mesos folks
>
> I’ve been thinking about Mesos rack awareness support for a while;
> it’s a common interest for lots of data center applications to provide
> data locality, fault tolerance, and better task placement. I created
> MESOS-5545 to track the story, and here is the initial design doc [1]
> to support rack awareness in Mesos.
>
> Looking forward to hearing any comments from end users and other developers,
>
> Thanks!
>
> [1]:
> https://docs.google.com/document/d/1rql_LZSwtQzBPALnk0qCLsmxcT3-zB7X7aJp-H3xxyE/edit?usp=sharing
>


RE: Rack awareness support for Mesos

2016-06-07 Thread Aaron Carey
Would this perhaps make sense as a Mesos module which automatically assigns 
labels to the agents, rather than something in the core itself?

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Du, Fan [fan...@intel.com]
Sent: 07 June 2016 16:16
To: Jörg Schad; user@mesos.apache.org
Subject: Re: Rack awareness support for Mesos

On 2016/6/6 23:48, Jörg Schad wrote:
> Hi,
> thanks for your idea and design doc!
> Just a few thoughts:
> a) The scheduling part would be implemented in a framework scheduler and
> not the Mesos Core, or?

I'm not sure which level of scheduling you mean. For the "Future"
section of the proposal? That's Mesos allocation logic. How to use rack
information to implement advanced features (fault tolerance, data
locality) is up to the framework's scheduling logic.

> b) As mentioned by James, this needs to be very flexible (and not
> necessarily based on network structure),

The proposed network topology detection is modular, to accommodate
Ethernet, InfiniBand, or other network implementations. And yes, users
can statically configure /etc/mesos/rack_id to manipulate the logical
network topology easily.


>afaik people are using labels
> on the agents to identify different fault domains which can then be
> interpreted by framework scheduler. Maybe it would make sense (instead
> of identifying the network structure) to come up with a common label
> naming scheme which can be understood by all/different frameworks.

I'm not convinced about still using labels. Based on what information
would you label the agents? IMO, the cluster operator still needs
something like LLDP to find out the network topology; every cluster
operator would need to do this on their own, so it's better to abstract
the logic inside Mesos and provide a common interface to frameworks.

Honestly speaking, I don't follow the argument for labels here. The
proposal is designed to do this *automatically*, to reduce maintenance
effort.

> Looking forward to your thoughts on this!
>
> On Mon, Jun 6, 2016 at 3:27 PM, james <gar...@verizon.net
> <mailto:gar...@verizon.net>> wrote:
>
> Hello,
>
>
> @Stephen:: I guess Stephen is bringing up the 'security' aspect of
> who gets access to the information, particularly cluster/cloud
> devops, customers, or interlopers?
>
>
> @Fan:: As a consultant, most of my customers either have or are
> planning hybrid installations, where some codes run on a local
> cluster or use 'the cloud' for dynamic load requirements. I would
> think your proposed scheme needs to be very flexible, whether applied
> to a campus or Metropolitan Area Network or massively distributed
> around the globe. What about different resource types (racks of
> arm64, GPU-centric hardware, DSPs, FPGAs, etc.)? Hardware diversity
> brings many benefits to the cluster/cloud capabilities.
>
>
> This also begs the question of hardware management (boot/config/online)
> of the various hardware, such as is built into CoreOS. Are several
> applications going to be supported? Standards track? Just Mesos/DC/OS
> centric?
>
>
> TIMING DATA:: This is the main issue I see. Once you start 'vectoring
> in resources' you need to add timing (latency) data to encourage
> robust and diversified use of this data. For HPC, this could be very
> valuable for RDMA-heavy algorithms, where memory-constrained
> workloads need not only knowledge of additional nearby memory
> resources, but also the approximate (based on previously collected
> data) latency and bandwidth constraints of using those resources.
>
>
> Great idea. I do like it very much.
>
> hth,
> James
>
>
>
> On 06/06/2016 05:06 AM, Stephen Gran wrote:
>
> Hi,
>
> This looks potentially interesting.  How does it work in a
> public cloud
> deployment scenario?  I assume you would just have to disable this
> feature, or not enable it?
>
> Cheers,
>
> On 06/06/16 10:17, Du, Fan wrote:
>
> Hi, Mesos folks
>
> I’ve been thinking about Mesos rack awareness support for a
> while,
>
> it’s a common interest for lots of data center applications
> to provide
> data locality,
>
> fault tolerance and better task placement. Create MESOS-5545
> to track
> the story,
>
> and here is the initial design doc [1] to support rack
> awareness in Mesos.
>
> Looking forward to hear any comments from end user and other
> developers,
>
> Thanks!
>
> [1]:
> 
> https://docs.google.com/document/d/1rql_LZSwtQzBPALnk0qCLsmxcT3-zB7X7aJp-H3xxyE/edit?usp=sharing
>
>
>
>


RE: distributed file systems

2016-05-11 Thread Aaron Carey
What exactly do you mean by deploying a mesos cluster to run on ceph etc?

Do you mean having a clustered file system mounted via nfs to the hosts which 
contains the mesos binaries?

Or something to do with how jobs are executed?

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: james [gar...@verizon.net]
Sent: 11 May 2016 17:08
To: user@mesos.apache.org
Subject: Re: distributed file systems

Hello Rodrick,

That EFS looks interesting, but I did not find the location for the
source-code/git download. I do not remember the (Linux) kernel hooks
for that distributed file system; or is it completely on top of the
system's code?

License details? I'm not sure if it's 100% open source.

BeeGFS [A] is partially open source, but that does not fit what is
needed for experimentation. A robust community around open sources and
tools, such as GitHub, should have been mentioned. Equally important
is a community keen on sharing and supporting other efforts to
replicate and use the components of these cluster-centric codes. [B,C]


James

[A] http://www.beegfs.com/content/

[B] https://forums.aws.amazon.com/thread.jspa?threadID=217783

[C]
http://searchaws.techtarget.com/news/4500272286/Amazon-EFS-stuck-in-beta-lacks-native-Windows-support




On 05/11/2016 01:07 AM, Rodrick Brown wrote:
> Does EFS count? :-)
>
> https://aws.amazon.com/efs/
>
>
> --
>
> *Rodrick Brown* / Systems Engineer
>
> +1 917 445 6839 / rodr...@orchardplatform.com
> <mailto:char...@orchardplatform.com>
>
> *Orchard Platform*
>
> 101 5th Avenue, 4th Floor, New York, NY 10003
>
> http://www.orchardplatform.com <http://www.orchardplatform.com/>
>
> Orchard Blog <http://www.orchardplatform.com/blog/> | Marketplace
> Lending Meetup <http://www.meetup.com/Peer-to-Peer-Lending-P2P/>
>
>
> On May 10 2016, at 9:07 pm, james <gar...@verizon.net> wrote:
>
> Hello,
>
>
> Has anyone customized/compiled mesos and successfully deployed a mesos
> cluster to run on cephfs, orangefs [1], or any other distributed file
> systems?
>
> If so, some detail on your setup would be appreciated.
>
>
> [1]
> 
> http://www.phoronix.com/scan.php?page=news_item=OrangeFS-Lands-Linux-4.6
>
>



RE: Enable s3a for fetcher

2016-05-11 Thread Aaron Carey
We'd be very excited to see a pluggable mesos fetcher!


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Ken Sipe [kens...@gmail.com]
Sent: 11 May 2016 08:40
To: user@mesos.apache.org
Subject: Re: Enable s3a for fetcher

Jamie,

I’m in Europe this week… so the timing of my responses is out of sync /
delayed. There are 2 issues to work with here. The first is having a
pluggable mesos fetcher… sounds like that is scheduled for 0.30. The other is
what is available on DC/OS. Could you move that discussion to that mailing
list? I will definitely work with you on getting this resolved.

ken
On May 10, 2016, at 3:45 PM, Briant, James 
<james.bri...@thermofisher.com<mailto:james.bri...@thermofisher.com>> wrote:

Ok. Thanks Joseph. I will figure out how to get a more recent hadoop onto my 
dcos agents then.

Jamie

From: Joseph Wu <jos...@mesosphere.io<mailto:jos...@mesosphere.io>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Tuesday, May 10, 2016 at 1:40 PM
To: user <user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Enable s3a for fetcher

I can't speak to what DCOS does or will do (you can ask on the associated 
mailing list: us...@dcos.io<mailto:us...@dcos.io>).

We will be maintaining existing functionality for the fetcher, which means 
supporting the schemes:
* file
* http, https, ftp, ftps
* hdfs, hftp, s3, s3n  <--  These rely on hadoop.

And we will retain the --hadoop_home agent flag, which you can use to specify 
the hadoop binary.
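As an illustration of that flag (paths and hostnames here are hypothetical):

```shell
# Point the agent at a local Hadoop install so hdfs://, s3://, and s3n://
# URIs in fetcher requests can be handed off to the hadoop client.
mesos-slave --master=zk://zk1:2181/mesos \
            --hadoop_home=/opt/hadoop-2.7.2 \
            --work_dir=/var/lib/mesos
```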

Other schemes might work right now, if you hack around with your node setup.  
But there's no guarantee that your hack will work between Mesos versions.  In 
future, we will associate a fetcher plugin for each scheme.  And you will be 
able to load custom fetcher plugins for additional schemes.
TLDR: no "nerfing" and less hackiness :)

On Tue, May 10, 2016 at 12:58 PM, Briant, James 
<james.bri...@thermofisher.com<mailto:james.bri...@thermofisher.com>> wrote:
This is the mesos latest documentation:

If the requested URI is based on some other protocol, then the fetcher tries to 
utilise a local Hadoop client and hence supports any protocol supported by the 
Hadoop client, e.g., HDFS, S3. See the slave configuration 
documentation<http://mesos.apache.org/documentation/latest/configuration/> for 
how to configure the slave with a path to the Hadoop client. [emphasis added]

What you are saying is that dcos simply wont install hadoop on agents?

Next question then: will you be nerfing fetcher.cpp, or will I be able to 
install hadoop on the agents myself, such that mesos will recognize s3a?


From: Joseph Wu <jos...@mesosphere.io<mailto:jos...@mesosphere.io>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Tuesday, May 10, 2016 at 12:20 PM
To: user <user@mesos.apache.org<mailto:user@mesos.apache.org>>

Subject: Re: Enable s3a for fetcher

Mesos does not explicitly support HDFS and S3.  Rather, Mesos will assume you 
have a hadoop binary and use it (blindly) for certain types of URIs.  If the 
hadoop binary is not present, the mesos-fetcher will fail to fetch your HDFS or 
S3 URIs.

Mesos does not ship/package hadoop, so these URIs are not expected to work out 
of the box (for plain Mesos distributions).  In all cases, the operator must 
preconfigure hadoop on each node (similar to how Docker in Mesos works).

Here's the epic tracking the modularization of the mesos-fetcher (I estimate 
it'll be done by 0.30):
https://issues.apache.org/jira/browse/MESOS-3918

^ Once done, it should be easier to plug in more fetchers, such as one for your 
use-case.

On Tue, May 10, 2016 at 11:21 AM, Briant, James 
<james.bri...@thermofisher.com<mailto:james.bri...@thermofisher.com>> wrote:
I’m happy to have default IAM role on the box that can read-only fetch from my 
s3 bucket. s3a gets the credentials from AWS instance metadata. It works.

If hadoop is gone, does that mean that hfds: URIs don’t work either?

Are you saying dcos and mesos are diverging? Mesos explicitly supports hdfs and 
s3.

In the absence of S3, how do you propose I make large binaries available to my 
cluster, and only to my cluster, on AWS?

Jamie

From: Cody Maloney <c...@mesosphere.io<mailto:c...@mesosphere.io>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Tuesday, May 10, 2016 at 10:58 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Enable s3a for fetcher

RE: Launch docker container from Marathon UI

2016-04-26 Thread Aaron Carey
Then you need to tell marathon to run the mysql container first, and then 
submit the wordpress container.

Sorry I think I misunderstood!


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Stefano Bianchi [jazzist...@gmail.com]
Sent: 26 April 2016 16:43
To: user@mesos.apache.org
Subject: RE: Launch docker container from Marathon UI


My problem is this: where can I find the mysql container? I have just told 
Marathon to run a wordpress Docker container, without specifying the mysql one.

Il 26/apr/2016 17:39, "Aaron Carey" <aca...@ilm.com<mailto:aca...@ilm.com>> ha 
scritto:
If you run the wordpress container on a different host to the mysql container 
and use --link on the command line, does that work?


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Stefano Bianchi [jazzist...@gmail.com<mailto:jazzist...@gmail.com>]
Sent: 26 April 2016 16:23
To: user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: RE: Launch docker container from Marathon UI


Hi Aaron,
Actually both Mesos-DNS and Calico are running, even though the containers I'm 
running are not yet using the Calico IP address specified in the JSON 
description. So I guess it is a bridging problem; I guess I can simply specify 
the HOST option in the container field in the Marathon UI.

Il 26/apr/2016 16:59, "Aaron Carey" <aca...@ilm.com<mailto:aca...@ilm.com>> ha 
scritto:
--link in docker should really be avoided when using marathon/mesos as it 
implies the containers are on the same host, but this will not always be the 
case when mesos schedules your containers (also I think it's being deprecated 
in docker anyway.. not sure though?).

This problem looks like one of service discovery within the mesos cluster: how 
does one service contact the other when it doesn't know which host the other 
service may have landed on?

There are several different solutions for service discovery, you can look into 
some like Project Calico to offer a network layer to docker or try dns based 
solutions like Mesos-dns or Consul (along with mesos-consul). I think marathon 
also has some concept of service discovery built in too if you use something 
like haproxy.

I hope this helps!

Aaron


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: June Taylor [j...@umn.edu<mailto:j...@umn.edu>]
Sent: 26 April 2016 15:22
To: user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Launch docker container from Marathon UI

Stefano,

The docker run flag --link is intended to connect the container to another 
running container. I do not know how this would operate in marathon. Perhaps it 
would be an application group which starts up the mysql docker image first, 
then the Wordpress docker image after it.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Tue, Apr 26, 2016 at 9:20 AM, Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>> wrote:
Thanks Rad Gruchalski. Actually I'm trying to make a JSON file that "translates" 
this command into Marathon:

docker run --name some-wordpress --link some-mysql:mysql -d wordpress

I guess the error is related to the fact that I'm not specifying --link 
some-mysql:mysql.
My problem is that I don't know how to do that. I tried to fill the environment 
variable fields:

Key = link
Value = some-mysql:mysql

But this way the app does not work either. How can I configure these env 
variables?

2016-04-26 16:01 GMT+02:00 Rad Gruchalski 
<ra...@gruchalski.com<mailto:ra...@gruchalski.com>>:
It says exactly what the problem is.

Start a marathon task with correct environment variables in env and you will be 
fine.


Best regards,

Radek Gruchalski

ra...@gruchalski.com<mailto:ra...@gruchalski.com>
<mailto:ra...@gruchalski.com>
de.linkedin.com/in/radgruchalski/<http://de.linkedin.com/in/radgruchalski/>

Confidentiality:
This communication is intended for the above-named person and may be 
confidential and/or legally privileged.
If it has come to you in error you must take no action based on it, nor must 
you copy or show it to anyone; please delete/destroy and inform the sender 
immediately.

On Tuesday, 26 April 2016 at 15:56, Stefano Bianchi wrote:

jupyter is working fine.
i tried to run wordpress and i get this error in stderr of mesos:


error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables

  Did you forget to --link some_mysql_container:mysql or set an external db

  with -e WORDPRESS_DB_HOST=hostname:port?



Does anyone know about this issue?
2016-04-26 15:51 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
However than

RE: Launch docker container from Marathon UI

2016-04-26 Thread Aaron Carey
If you run the wordpress container on a different host to the mysql container 
and use --link on the command line, does that work?


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Stefano Bianchi [jazzist...@gmail.com]
Sent: 26 April 2016 16:23
To: user@mesos.apache.org
Subject: RE: Launch docker container from Marathon UI


Hi Aaron,
Actually both Mesos-DNS and Calico are running, even though the containers I'm 
running are not yet using the Calico IP address specified in the JSON 
description. So I guess it is a bridging problem; I guess I can simply specify 
the HOST option in the container field in the Marathon UI.

Il 26/apr/2016 16:59, "Aaron Carey" <aca...@ilm.com<mailto:aca...@ilm.com>> ha 
scritto:
--link in docker should really be avoided when using marathon/mesos as it 
implies the containers are on the same host, but this will not always be the 
case when mesos schedules your containers (also I think it's being deprecated 
in docker anyway.. not sure though?).

This problem looks like one of service discovery within the mesos cluster: how 
does one service contact the other when it doesn't know which host the other 
service may have landed on?

There are several different solutions for service discovery, you can look into 
some like Project Calico to offer a network layer to docker or try dns based 
solutions like Mesos-dns or Consul (along with mesos-consul). I think marathon 
also has some concept of service discovery built in too if you use something 
like haproxy.

I hope this helps!

Aaron


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: June Taylor [j...@umn.edu<mailto:j...@umn.edu>]
Sent: 26 April 2016 15:22
To: user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Launch docker container from Marathon UI

Stefano,

The docker run flag --link is intended to connect the container to another 
running container. I do not know how this would operate in marathon. Perhaps it 
would be an application group which starts up the mysql docker image first, 
then the Wordpress docker image after it.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Tue, Apr 26, 2016 at 9:20 AM, Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>> wrote:
Thanks Rad Gruchalski. Actually I'm trying to make a JSON file that "translates" 
this command into Marathon:

docker run --name some-wordpress --link some-mysql:mysql -d wordpress

I guess the error is related to the fact that I'm not specifying --link 
some-mysql:mysql.
My problem is that I don't know how to do that. I tried to fill the environment 
variable fields:

Key = link
Value = some-mysql:mysql

But this way the app does not work either. How can I configure these env 
variables?

2016-04-26 16:01 GMT+02:00 Rad Gruchalski 
<ra...@gruchalski.com<mailto:ra...@gruchalski.com>>:
It says exactly what the problem is.

Start a marathon task with correct environment variables in env and you will be 
fine.


Best regards,

Radek Gruchalski

ra...@gruchalski.com<mailto:ra...@gruchalski.com>
<mailto:ra...@gruchalski.com>
de.linkedin.com/in/radgruchalski/<http://de.linkedin.com/in/radgruchalski/>

Confidentiality:
This communication is intended for the above-named person and may be 
confidential and/or legally privileged.
If it has come to you in error you must take no action based on it, nor must 
you copy or show it to anyone; please delete/destroy and inform the sender 
immediately.

On Tuesday, 26 April 2016 at 15:56, Stefano Bianchi wrote:

jupyter is working fine.
i tried to run wordpress and i get this error in stderr of mesos:


error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables

  Did you forget to --link some_mysql_container:mysql or set an external db

  with -e WORDPRESS_DB_HOST=hostname:port?



Does anyone know about this issue?
2016-04-26 15:51 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
However thank you so much to all!

2016-04-26 15:22 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
I found the answer by myself; sorry if I disturbed you.

2016-04-26 15:19 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Now that it is running, how can i check the User Interface?


2016-04-26 15:18 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Yes, now it's running!!! June you are awesome!!!

2016-04-26 15:16 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
i have done these 2 commands, now jupyter is in deploying in marathon, staging 
in mesos.
Is there some additional configuration needed

RE: Launch docker container from Marathon UI

2016-04-26 Thread Aaron Carey
--link in docker should really be avoided when using marathon/mesos as it 
implies the containers are on the same host, but this will not always be the 
case when mesos schedules your containers (also I think it's being deprecated 
in docker anyway.. not sure though?).

This problem looks like one of service discovery within the mesos cluster: how 
does one service contact the other when it doesn't know which host the other 
service may have landed on?

There are several different solutions for service discovery, you can look into 
some like Project Calico to offer a network layer to docker or try dns based 
solutions like Mesos-dns or Consul (along with mesos-consul). I think marathon 
also has some concept of service discovery built in too if you use something 
like haproxy.
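For Stefano's WordPress case specifically, here is a minimal sketch of a Marathon app definition that replaces the --link wiring with plain environment variables (the database hostname, port, and password below are placeholders — the real hostname would come from whichever discovery layer you pick):

```json
{
  "id": "/wordpress",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "wordpress", "network": "BRIDGE" }
  },
  "env": {
    "WORDPRESS_DB_HOST": "mysql.example.internal:3306",
    "WORDPRESS_DB_PASSWORD": "changeme"
  }
}
```

With Mesos-DNS or Consul, WORDPRESS_DB_HOST would point at the discovered name of the MySQL task rather than a fixed host.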

I hope this helps!

Aaron


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: June Taylor [j...@umn.edu]
Sent: 26 April 2016 15:22
To: user@mesos.apache.org
Subject: Re: Launch docker container from Marathon UI

Stefano,

The docker run flag --link is intended to connect the container to another 
running container. I do not know how this would operate in marathon. Perhaps it 
would be an application group which starts up the mysql docker image first, 
then the Wordpress docker image after it.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Tue, Apr 26, 2016 at 9:20 AM, Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>> wrote:
Thanks Rad Gruchalski, actually I'm trying to make a json file that "translates" 
this command in marathon:

docker run --name some-wordpress --link some-mysql:mysql -d wordpress

I guess the error is related to the fact that I'm not specifying --link 
some-mysql:mysql.
My problem is that I don't know how to do that; I tried to fill in the 
environment variable fields:

Key = link
Value = some-mysql:mysql

But this way the app still does not work. How can I configure these env 
variables?

2016-04-26 16:01 GMT+02:00 Rad Gruchalski 
<ra...@gruchalski.com<mailto:ra...@gruchalski.com>>:
It says exactly what the problem is.

Start a marathon task with correct environment variables in env and you will be 
fine.


Best regards,

Radek Gruchalski

ra...@gruchalski.com<mailto:ra...@gruchalski.com>
<mailto:ra...@gruchalski.com>
de.linkedin.com/in/radgruchalski/<http://de.linkedin.com/in/radgruchalski/>


On Tuesday, 26 April 2016 at 15:56, Stefano Bianchi wrote:

Jupyter is working fine.
I tried to run wordpress and I get this error in the stderr of mesos:


error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables

  Did you forget to --link some_mysql_container:mysql or set an external db

  with -e WORDPRESS_DB_HOST=hostname:port?



Does anyone know about this issue?

2016-04-26 15:51 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
However thank you so much to all!

2016-04-26 15:22 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
I found the answer by myself, sorry if I disturbed you.

2016-04-26 15:19 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Now that it is running, how can I check the User Interface?


2016-04-26 15:18 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Yes, now it's running!!! June you are awesome!!!

2016-04-26 15:16 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
I have done these 2 commands; now jupyter is deploying in marathon, staging 
in mesos.
Is there some additional configuration needed?

2016-04-26 15:13 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
June, I tried to run your json, but the task is waiting, and when it starts it 
fails immediately.

I guess it's because I did not run these commands:


  1. echo 'docker,mesos' > /etc/mesos-slave/containerizers
  2. echo '5mins' > /etc/mesos-slave/executor_registration_timeout

Could it be the problem?

2016-04-26 15:02 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Thank you June Taylor, it is exactly what I was intending.
If it is not disturbing you, I'll try jupyter notebook as well, just to make 
some tests on how to launch it with marathon.
Stay tuned :)

2016-04-26 14:58 GMT+02:00 Stefano Bianchi 
<jazzist...@gmail.com<mailto:jazzist...@gmail.com>>:
Thanks haosdent.
Actually I have run this kind of app

RE: Running Mesos agent on ARM (Raspberry Pi)?

2016-04-25 Thread Aaron Carey
Out of curiosity... is this for fun or production workloads? I'd be curious to 
hear about raspis being used in production!


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Sharma Podila [spod...@netflix.com]
Sent: 22 April 2016 17:53
To: user@mesos.apache.org; dev
Subject: Running Mesos agent on ARM (Raspberry Pi)?

We are working on a hack to run Mesos agents on Raspberry Pi and are wondering 
if anyone here has done that before. From the Google search results we looked 
at so far, it seems like it has been compiled, but we haven't seen an 
indication that anyone has run it and launched tasks on them. And does it sound 
right that it might take 4 hours or so to compile?

We are looking to run just the agents. The master will be on a regular Ubuntu 
laptop or a server.

Appreciate any pointers.




RE: Altering agent resources after startup

2016-04-20 Thread Aaron Carey
Thanks Klaus,

After reading Haosdent's response I have a feeling a ticket may already exist: 
MESOS-3059, which would work for our use case; however, I can't actually access 
that ticket.

I'm happy to create a new one if needed though?

Thanks,
Aaron
--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Klaus Ma [klaus1982...@gmail.com]
Sent: 20 April 2016 10:18
To: user@mesos.apache.org
Subject: Re: Altering agent resources after startup

Hi Aaron,

Currently, an agent's resources can NOT be updated after it has started; the 
QoS controller can only report revocable resources. But detecting resources on 
the fly is a reasonable requirement; would you help open a JIRA for this? I 
think there are two sub-requirements in this scenario:

1. The agent's resources can be updated on the fly; this is different from 
MESOS-1739, which focuses on agent restart
2. Self-defined resources which are only consumed as special resources

If any comments, please let me know.

Thanks
Klaus


On Wed, Apr 20, 2016 at 4:56 PM Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
Hi All,

I was wondering if it was possible somehow to alter an agent's resources after 
it has started?

Example: we are dynamically attaching and detaching EBS volumes to EC2 hosts 
running as agents. (This is part of our docker volume setup using RexRay). When 
a host has an EBS volume attached to it I'd like to be able to mark that as a 
new resource on the agent. Note that it's not the disk space we care about 
here, just the name of the volume itself. This would then allow us to schedule 
tasks that require access to the data on that EBS volume all on the same host.

Anyone have any ideas?

Thanks!


Aaron
--

Regards,

Da (Klaus), Ma (马达), PMP® | Advisory Software Engineer
IBM Platform Development & Support, STG, IBM GCG
+86-10-8245 4084 | mad...@cn.ibm.com<mailto:mad...@cn.ibm.com> | http://k82.me


RE: Altering agent resources after startup

2016-04-20 Thread Aaron Carey
Ah thank you! I tried searching Jira but didn't find that ticket.

Yes, I think you might be right about the attributes, although I don't seem to 
be able to get to the MESOS-3059 ticket in Jira. Do you know if it's on the 
roadmap?

Thanks,
Aaron

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: haosdent [haosd...@gmail.com]
Sent: 20 April 2016 10:12
To: user
Subject: Re: Altering agent resources after startup

There is a ticket, [Allow slave reconfiguration on 
restart](https://issues.apache.org/jira/browse/MESOS-1739), related to this, 
but it is not implemented yet. However, your requirement doesn't seem to be 
about changing an agent's resources dynamically; it looks more like changing 
an agent's labels/attributes dynamically.
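As a concrete sketch of the attribute route (the volume id is hypothetical, and a local directory stands in for /etc/mesos-slave here; attributes are only read at agent startup, so the agent still has to be restarted after the file changes):

```shell
# Write the attached EBS volume's name into the agent's attributes file.
# Real agents read /etc/mesos-slave/attributes; a local dir is used here
# purely for illustration.
CONF_DIR=./mesos-slave-conf
mkdir -p "$CONF_DIR"
echo "ebs_volume:vol-0abc123" > "$CONF_DIR/attributes"
cat "$CONF_DIR/attributes"   # -> ebs_volume:vol-0abc123
# Then restart mesos-slave so it re-registers with the new attribute, and
# have the framework place tasks only on offers carrying that ebs_volume.
```

This gives you host affinity by volume name without pretending the volume is a countable resource.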

On Wed, Apr 20, 2016 at 4:56 PM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
Hi All,

I was wondering if it was possible somehow to alter an agent's resources after 
it has started?

Example: we are dynamically attaching and detaching EBS volumes to EC2 hosts 
running as agents. (This is part of our docker volume setup using RexRay). When 
a host has an EBS volume attached to it I'd like to be able to mark that as a 
new resource on the agent. Note that it's not the disk space we care about 
here, just the name of the volume itself. This would then allow us to schedule 
tasks that require access to the data on that EBS volume all on the same host.

Anyone have any ideas?

Thanks!

Aaron



--
Best Regards,
Haosdent Huang


Altering agent resources after startup

2016-04-20 Thread Aaron Carey
Hi All,

I was wondering if it was possible somehow to alter an agent's resources after 
it has started?

Example: we are dynamically attaching and detaching EBS volumes to EC2 hosts 
running as agents. (This is part of our docker volume setup using RexRay). When 
a host has an EBS volume attached to it I'd like to be able to mark that as a 
new resource on the agent. Note that it's not the disk space we care about 
here, just the name of the volume itself. This would then allow us to schedule 
tasks that require access to the data on that EBS volume all on the same host.

Anyone have any ideas?

Thanks!

Aaron


RE: running mesos slave in a docker container

2016-03-16 Thread Aaron Carey
Hmm.. I'm not sure... I can't seem to find the linked Dockerfiles on Github 
either..

Could one of the maintainers point us in the right direction?

I hope dind builds will still be supported!


From: Yuri Finkelstein [yurif2...@gmail.com]
Sent: 15 March 2016 18:35
To: user@mesos.apache.org
Subject: Re: running mesos slave in a docker container

Thanks for pointing this out, I did not see this one. Wow, that's exactly what 
one needs to run a mesos slave in docker. But the image is not kept up to 
date. The latest tag is 0.2.4_mesos-0.26.0_docker-1.8.2_ubuntu-14.04.3
Do you know how one can trigger an update to keep it on par with 
mesosphere/mesos-slave?

On Tue, Mar 15, 2016 at 1:53 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
Would the officially provided docker-in-docker image help?

mesosphere/mesos-slave-dind



From: Yuri Finkelstein [yurif2...@gmail.com<mailto:yurif2...@gmail.com>]
Sent: 15 March 2016 04:25
To: user@mesos.apache.org<mailto:user@mesos.apache.org>

Subject: Re: running mesos slave in a docker container

Sure, but my point was: why would mesosphere not put the docker binary in the 
official docker image? Maintaining my own docker image of anything is the last 
instrument I use. That's what "official" images are for, after all.

On Mon, Mar 14, 2016 at 8:30 PM, Yong Tang 
<yong.tang.git...@outlook.com<mailto:yong.tang.git...@outlook.com>> wrote:
One way to avoid mapping docker's library dependencies between the host and 
docker containers is to install the docker binaries inside the docker container:


https://docs.docker.com/engine/installation/binaries/


and then map /var/run/docker.sock between the host and the docker container. In 
this way, library dependency conflicts between host and docker containers can 
be mostly avoided.


Thanks
Yong


Date: Mon, 14 Mar 2016 18:49:45 -0700
Subject: Re: running mesos slave in a docker container
From: yurif2...@gmail.com<mailto:yurif2...@gmail.com>
To: user@mesos.apache.org<mailto:user@mesos.apache.org>


Enumerating each and every lib path and dealing with potential conflicts 
between the host and docker libc, etc. - I didn't want to deal with this 
option; it's quite bad IMHO.

On Mon, Mar 14, 2016 at 6:42 PM, haosdent 
<haosd...@gmail.com<mailto:haosd...@gmail.com>> wrote:
>2. --volumes-from
So far the DockerContainerizer in Mesos doesn't support this option.

>1. What is the best method to point mesos-slave running in a container to a 
>working
Usually I mount the docker binary into the container from the host.

```
docker run --privileged -d \
--name=mesos-slave \
--net=host \
-p 31000-31300:31000-31300 \
-p 5051:5051 \
-v /usr/bin/docker:/bin/docker \
-v /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1:/usr/lib/libdevmapper.so.1.02 \
-v /lib/x86_64-linux-gnu/libpthread.so.0:/lib/libpthread.so.0 \
-v /usr/lib/x86_64-linux-gnu/libsqlite3.so:/lib/libsqlite3.so.0 \
-v /lib/x86_64-linux-gnu/libudev.so.1:/lib/libudev.so.1 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /sys:/sys \
-v /tmp:/tmp \
-e MESOS_MASTER=zk://10.10.10.9:2181/mesos<http://10.10.10.9:2181/mesos> \
-e MESOS_LOG_DIR=/tmp/log \
-e MESOS_CONTAINERIZERS=docker \
-e MESOS_LOGGING_LEVEL=INFO \
-e MESOS_IP=10.10.10.9 \
-e MESOS_WORK_DIR=/tmp \
mesosphere/mesos-slave mesos-slave
```

On Tue, Mar 15, 2016 at 8:47 AM, Yuri Finkelstein 
<yurif2...@gmail.com<mailto:yurif2...@gmail.com>> wrote:
Since mesosphere distributes images of mesos software in a container 
(https://hub.docker.com/r/mesosphere/mesos-slave/), I decided to try this 
option. After trying this with various settings I settled on a configuration 
that basically works. But I do see one problem, and that is what this message 
is about.

To start off, I find it strange that the image does not contain the docker 
distribution itself. After all, in order to use the docker containerizer one 
needs to point the mesos slave at a docker binary. If I bind-mount the docker 
binary to the container's /usr/local/bin/docker and use the option 
--docker=/usr/local/bin/docker I run into the problem of dynamic library 
dependencies: docker depends on a bunch of dynamic libraries:
==
ldd /usr/bin/docker
linux-vdso.so.1 =>  (0x7fffaebfe000)
libsystemd-journal.so.0 => /lib/x86_64-linux-gnu/libsystemd-journal.so.0 
(0x7f0a1458b000)
libapparmor.so.1 => /usr/lib/x86_64-linux-gnu/libapparmor.so.1 
(0x7f0a1437f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f0a1416)
libdevmapper.so.1.02.1 => /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1 
(0x7f0a13f27000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f0a13b62000)
... and many more
===
Mounting /lib/x86_64-linux-gnu/ in docker is a horrible idea which is not worth 
discussing. So I wonder what the rationale is behind the decision to not 
include the docker binary in the mesosphere container, and how other people 
solve this problem.

RE: running mesos slave in a docker container

2016-03-15 Thread Aaron Carey
Would the officially provided docker-in-docker image help?

mesosphere/mesos-slave-dind



From: Yuri Finkelstein [yurif2...@gmail.com]
Sent: 15 March 2016 04:25
To: user@mesos.apache.org
Subject: Re: running mesos slave in a docker container

Sure, but my point was: why would mesosphere not put the docker binary in the 
official docker image? Maintaining my own docker image of anything is the last 
instrument I use. That's what "official" images are for, after all.

On Mon, Mar 14, 2016 at 8:30 PM, Yong Tang 
> wrote:
One way to avoid mapping docker's library dependencies between the host and 
docker containers is to install the docker binaries inside the docker container:


https://docs.docker.com/engine/installation/binaries/


and then map /var/run/docker.sock between the host and the docker container. In 
this way, library dependency conflicts between host and docker containers can 
be mostly avoided.


Thanks
Yong


Date: Mon, 14 Mar 2016 18:49:45 -0700
Subject: Re: running mesos slave in a docker container
From: yurif2...@gmail.com
To: user@mesos.apache.org


Enumerating each and every lib path and dealing with potential conflicts 
between the host and docker libc, etc. - I didn't want to deal with this 
option; it's quite bad IMHO.

On Mon, Mar 14, 2016 at 6:42 PM, haosdent 
> wrote:
>2. --volumes-from
So far the DockerContainerizer in Mesos doesn't support this option.

>1. What is the best method to point mesos-slave running in a container to a 
>working
Usually I mount the docker binary into the container from the host.

```
docker run --privileged -d \
--name=mesos-slave \
--net=host \
-p 31000-31300:31000-31300 \
-p 5051:5051 \
-v /usr/bin/docker:/bin/docker \
-v /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1:/usr/lib/libdevmapper.so.1.02 \
-v /lib/x86_64-linux-gnu/libpthread.so.0:/lib/libpthread.so.0 \
-v /usr/lib/x86_64-linux-gnu/libsqlite3.so:/lib/libsqlite3.so.0 \
-v /lib/x86_64-linux-gnu/libudev.so.1:/lib/libudev.so.1 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /sys:/sys \
-v /tmp:/tmp \
-e MESOS_MASTER=zk://10.10.10.9:2181/mesos \
-e MESOS_LOG_DIR=/tmp/log \
-e MESOS_CONTAINERIZERS=docker \
-e MESOS_LOGGING_LEVEL=INFO \
-e MESOS_IP=10.10.10.9 \
-e MESOS_WORK_DIR=/tmp \
mesosphere/mesos-slave mesos-slave
```

On Tue, Mar 15, 2016 at 8:47 AM, Yuri Finkelstein 
> wrote:
Since mesosphere distributes images of mesos software in a container 
(https://hub.docker.com/r/mesosphere/mesos-slave/), I decided to try this 
option. After trying this with various settings I settled on a configuration 
that basically works. But I do see one problem and this is what this message 
about.

To start off, I find it strange that the image does not contain the docker 
distribution itself. After all, in order to use the docker containerizer one 
needs to point the mesos slave at a docker binary. If I bind-mount the docker 
binary to the container's /usr/local/bin/docker and use the option 
--docker=/usr/local/bin/docker I run into the problem of dynamic library 
dependencies: docker depends on a bunch of dynamic libraries:
==
ldd /usr/bin/docker
linux-vdso.so.1 =>  (0x7fffaebfe000)
libsystemd-journal.so.0 => /lib/x86_64-linux-gnu/libsystemd-journal.so.0 
(0x7f0a1458b000)
libapparmor.so.1 => /usr/lib/x86_64-linux-gnu/libapparmor.so.1 
(0x7f0a1437f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f0a1416)
libdevmapper.so.1.02.1 => /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1 
(0x7f0a13f27000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f0a13b62000)
... and many more
===
Mounting /lib/x86_64-linux-gnu/ in docker is a horrible idea which is not worth 
discussing. So I wonder what the rationale is behind the decision to not 
include the docker binary in the mesosphere container, and how other people 
solve this problem.


Here is one solution that I found. I use docker:dind, not as a running 
container but rather as a volume source:

==
docker create --name "docker-proxy" -v 
/var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin docker:dind
===


This container contains a fully functional docker binary in its /usr/local/bin, 
and this is all I need it for. To make the mesos-slave container see this 
binary I simply use --volumes-from option:
==
docker run -d --restart=unless-stopped --volumes-from "docker-proxy" 
--docker=/usr/local/bin/docker --containerizers="docker,mesos" --name 
$MESOS_SLAVE $MESOS_SLAVE_IMAGE ...
==

This works like a charm. But there is the following problem: in order for 
mesos-slave to function in this mode, it needs to spawn executors in docker 
containers as well. For that purpose mesos slave 

RE: Downloading s3 uris

2016-02-29 Thread Aaron Carey
Ah I think I've solved my own problem:

We are using the dind mesos slave container and hadn't mounted the host's 
/var/lib/mesos folder within the container, so it wasn't showing up in the 
other containers.
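For anyone hitting the same thing, here is a sketch of the fix (the image name and flags are assumptions based on the mesosphere/mesos-slave-dind image discussed elsewhere on this list, not a verified invocation); the key line is bind-mounting the agent work dir at the same path inside the container:

```shell
# Wrap the docker invocation in a function so the key line stands out;
# call run_dind_agent on a host that actually has docker installed.
run_dind_agent() {
  docker run --privileged -d \
    --name=mesos-slave-dind \
    --net=host \
    -v /var/lib/mesos:/var/lib/mesos \
    -e MESOS_WORK_DIR=/var/lib/mesos \
    -e MESOS_MASTER=zk://master.example:2181/mesos \
    mesosphere/mesos-slave-dind
}
# With the work dir mounted at the same path inside and outside, sandbox
# paths written by the agent resolve on the host and in task containers,
# so fetched files show up where the web UI says they are.
```
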

Oops!

Thanks for your help,

Aaron


From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:53
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

Sorry apparently the inline image didn't work:

http://i.imgur.com/x1cPXvW.png


From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:50
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

To illustrate:

[X]

/var/lib/mesos/slaves/20160212-131720-1510021036-5050-1-S0/frameworks/20160212-131720-1510021036-5050-1-/executors/test.3d01cbd4-dcb4-11e5-868c-02420a4d969a/runs/25d29966-0515-4aa6-8d99-63a377aa68e8$
 ls -alh
total 8.0K
drwxr-xr-x 2 root root 4.0K Feb 26 18:10 .
drwxr-xr-x 3 root root 4.0K Feb 26 18:10 ..





From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:45
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

Yeah, I've managed to find the sandbox itself on disk, but it's empty, even 
though the file shows up in the web UI...

My task is a docker container and it doesn't show up in the container either

Any ideas?

Thanks!
Aaron


From: Joseph Wu [jos...@mesosphere.io]
Sent: 26 February 2016 18:27
To: user@mesos.apache.org
Subject: Re: Downloading s3 uris

The sandbox directory structure is a bit deep...  See the "Where is the 
sandbox?" section here: http://mesos.apache.org/documentation/latest/sandbox/
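To make that layout concrete, the path is assembled from a handful of ids like so (every id below is made up for illustration; a local directory stands in for the real work dir):

```shell
# Compose a sandbox path the way the agent lays it out; normally WORK_DIR
# is /var/lib/mesos or whatever --work_dir is set to.
WORK_DIR=./work_dir
AGENT_ID=20160212-131720-0000000000-5050-1-S0
FRAMEWORK_ID=20160212-131720-0000000000-5050-1-0000
EXECUTOR_ID=test.3d01cbd4
CONTAINER_ID=25d29966-0515-4aa6-8d99-63a377aa68e8
SANDBOX="$WORK_DIR/slaves/$AGENT_ID/frameworks/$FRAMEWORK_ID/executors/$EXECUTOR_ID/runs/$CONTAINER_ID"
mkdir -p "$SANDBOX"
echo "$SANDBOX"
```
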


On Fri, Feb 26, 2016 at 10:15 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
A second question for you all..

I'm testing http uri downloads, and all the logs say that the file has 
downloaded (it even shows up in the mesos UI in the sandbox) but I can't find 
the file on disk anywhere. It doesn't appear in the docker container I'm 
running either (shouldn't it be in /mnt/mesos/sandbox?)

Am I missing something here?

Thanks for your help,

Aaron



From: Radoslaw Gruchalski [ra...@gruchalski.com<mailto:ra...@gruchalski.com>]
Sent: 26 February 2016 17:41

To: user@mesos.apache.org<mailto:user@mesos.apache.org>; 
user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Downloading s3 uris

Just keep in mind that every execution of such a command starts a JVM and is, 
generally, heavyweight. Use WebHDFS if you can.

Sent from Outlook Mobile<https://aka.ms/qtex0l>




On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" 
<linshuai2...@gmail.com<mailto:linshuai2...@gmail.com>> wrote:

If you don't want to configure hadoop on your mesos slaves, the only workaround 
I see is to write a "hadoop" script and put it in your PATH. It needs to support 
the following usage patterns:

- hadoop version
- hadoop fs -copyToLocal s3n://path /target/directory/

On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straightforward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com<mailto:haosd...@gmail.com>]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
about ACLs on the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang




RE: Downloading s3 uris

2016-02-29 Thread Aaron Carey
Sorry apparently the inline image didn't work:

http://i.imgur.com/x1cPXvW.png


From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:50
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

To illustrate:

[X]

/var/lib/mesos/slaves/20160212-131720-1510021036-5050-1-S0/frameworks/20160212-131720-1510021036-5050-1-/executors/test.3d01cbd4-dcb4-11e5-868c-02420a4d969a/runs/25d29966-0515-4aa6-8d99-63a377aa68e8$
 ls -alh
total 8.0K
drwxr-xr-x 2 root root 4.0K Feb 26 18:10 .
drwxr-xr-x 3 root root 4.0K Feb 26 18:10 ..





From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:45
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

Yeah, I've managed to find the sandbox itself on disk, but it's empty, even 
though the file shows up in the web UI...

My task is a docker container and it doesn't show up in the container either

Any ideas?

Thanks!
Aaron


From: Joseph Wu [jos...@mesosphere.io]
Sent: 26 February 2016 18:27
To: user@mesos.apache.org
Subject: Re: Downloading s3 uris

The sandbox directory structure is a bit deep...  See the "Where is the 
sandbox?" section here: http://mesos.apache.org/documentation/latest/sandbox/


On Fri, Feb 26, 2016 at 10:15 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
A second question for you all..

I'm testing http uri downloads, and all the logs say that the file has 
downloaded (it even shows up in the mesos UI in the sandbox) but I can't find 
the file on disk anywhere. It doesn't appear in the docker container I'm 
running either (shouldn't it be in /mnt/mesos/sandbox?)

Am I missing something here?

Thanks for your help,

Aaron



From: Radoslaw Gruchalski [ra...@gruchalski.com<mailto:ra...@gruchalski.com>]
Sent: 26 February 2016 17:41

To: user@mesos.apache.org<mailto:user@mesos.apache.org>; 
user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Downloading s3 uris

Just keep in mind that every execution of such a command starts a JVM and is, 
generally, heavyweight. Use WebHDFS if you can.

Sent from Outlook Mobile<https://aka.ms/qtex0l>




On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" 
<linshuai2...@gmail.com<mailto:linshuai2...@gmail.com>> wrote:

If you don't want to configure hadoop on your mesos slaves, the only workaround 
I see is to write a "hadoop" script and put it in your PATH. It needs to support 
the following usage patterns:

- hadoop version
- hadoop fs -copyToLocal s3n://path /target/directory/

On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straightforward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com<mailto:haosd...@gmail.com>]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
about ACLs on the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang




RE: Downloading s3 uris

2016-02-29 Thread Aaron Carey
To illustrate:

[X]

/var/lib/mesos/slaves/20160212-131720-1510021036-5050-1-S0/frameworks/20160212-131720-1510021036-5050-1-/executors/test.3d01cbd4-dcb4-11e5-868c-02420a4d969a/runs/25d29966-0515-4aa6-8d99-63a377aa68e8$
 ls -alh
total 8.0K
drwxr-xr-x 2 root root 4.0K Feb 26 18:10 .
drwxr-xr-x 3 root root 4.0K Feb 26 18:10 ..





From: Aaron Carey [aca...@ilm.com]
Sent: 29 February 2016 08:45
To: user@mesos.apache.org
Subject: RE: Downloading s3 uris

Yeah, I've managed to find the sandbox itself on disk, but it's empty, even 
though the file shows up in the web UI...

My task is a docker container and it doesn't show up in the container either

Any ideas?

Thanks!
Aaron


From: Joseph Wu [jos...@mesosphere.io]
Sent: 26 February 2016 18:27
To: user@mesos.apache.org
Subject: Re: Downloading s3 uris

The sandbox directory structure is a bit deep...  See the "Where is the 
sandbox?" section here: http://mesos.apache.org/documentation/latest/sandbox/


On Fri, Feb 26, 2016 at 10:15 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
A second question for you all..

I'm testing http uri downloads, and all the logs say that the file has 
downloaded (it even shows up in the mesos UI in the sandbox) but I can't find 
the file on disk anywhere. It doesn't appear in the docker container I'm 
running either (shouldn't it be in /mnt/mesos/sandbox?)

Am I missing something here?

Thanks for your help,

Aaron



From: Radoslaw Gruchalski [ra...@gruchalski.com<mailto:ra...@gruchalski.com>]
Sent: 26 February 2016 17:41

To: user@mesos.apache.org<mailto:user@mesos.apache.org>; 
user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Downloading s3 uris

Just keep in mind that every execution of such a command starts a JVM and is, 
generally, heavyweight. Use WebHDFS if you can.

Sent from Outlook Mobile<https://aka.ms/qtex0l>




On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" 
<linshuai2...@gmail.com<mailto:linshuai2...@gmail.com>> wrote:

If you don't want to configure hadoop on your mesos slaves, the only workaround 
I see is to write a "hadoop" script and put it in your PATH. It needs to support 
the following usage patterns:

- hadoop version
- hadoop fs -copyToLocal s3n://path /target/directory/

On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straightforward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com<mailto:haosd...@gmail.com>]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
about ACLs on the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang




RE: Downloading s3 uris

2016-02-26 Thread Aaron Carey
A second question for you all..

I'm testing http uri downloads, and all the logs say that the file has 
downloaded (it even shows up in the mesos UI in the sandbox) but I can't find 
the file on disk anywhere. It doesn't appear in the docker container I'm 
running either (shouldn't it be in /mnt/mesos/sandbox?)

Am I missing something here?

Thanks for your help,

Aaron



From: Radoslaw Gruchalski [ra...@gruchalski.com]
Sent: 26 February 2016 17:41
To: user@mesos.apache.org; user@mesos.apache.org
Subject: Re: Downloading s3 uris

Just keep in mind that every execution of such a command starts a JVM and is, 
generally, heavyweight. Use WebHDFS if you can.

Sent from Outlook Mobile<https://aka.ms/qtex0l>




On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" 
<linshuai2...@gmail.com<mailto:linshuai2...@gmail.com>> wrote:

If you don't want to configure hadoop on your mesos slaves, the only workaround 
I see is to write a "hadoop" script and put it in your PATH. It needs to support 
the following usage patterns:

- hadoop version
- hadoop fs -copyToLocal s3n://path /target/directory/

On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:
I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straight forward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com<mailto:haosd...@gmail.com>]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
ACL about the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang



RE: Downloading s3 uris

2016-02-26 Thread Aaron Carey
I know this is a stupid question...

but how do I just install the client without all the rest of the stuff?


From: haosdent [haosd...@gmail.com]
Sent: 26 February 2016 16:50
To: user
Subject: Re: Downloading s3 uris

So far you have to install an HDFS client if you want to use "s3n://xxx". :-(

On Sat, Feb 27, 2016 at 12:39 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
Agreed with @aaron, it will be too much manual work to generate an S3 URL 
every time.

Thanks
Abhishek

From: Aaron Carey <aca...@ilm.com<mailto:aca...@ilm.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:31 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: RE: Downloading s3 uris

I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straight forward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com<mailto:haosd...@gmail.com>]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
ACL about the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang



--
Best Regards,
Haosdent Huang


RE: Downloading s3 uris

2016-02-26 Thread Aaron Carey
I was trying to avoid generating urls for everything as this will complicate 
things a lot.

Is there a straight forward way to get the fetcher to do it directly?


From: haosdent [haosd...@gmail.com]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId if it is private? 
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html

On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar 
<abhishek.amral...@talentica.com<mailto:abhishek.amral...@talentica.com>> wrote:
In that case do we need to keep bucket/files public?

-Abhishek

From: Zhitao Li <zhi...@uber.com<mailto:zhi...@uber.com>>
Reply-To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Date: Friday, 26 February 2016 at 8:23 AM
To: "user@mesos.apache.org<mailto:user@mesos.apache.org>" 
<user@mesos.apache.org<mailto:user@mesos.apache.org>>
Subject: Re: Downloading s3 uris

Haven't directly used s3 download, but I think a workaround (if you don't care 
ACL about the files) is to use 
http<http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
 url instead.
On Feb 26, 2016, at 8:17 AM, Aaron Carey 
<aca...@ilm.com<mailto:aca...@ilm.com>> wrote:

I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron




--
Best Regards,
Haosdent Huang


Downloading s3 uris

2016-02-26 Thread Aaron Carey
I'm attempting to fetch files from s3 uris in mesos, but we're not using hdfs 
in our cluster... however I believe I need the client installed.

Is it possible to just have the client running without a full hdfs setup?

I haven't been able to find much information in the docs, could someone point 
me in the right direction?

Thanks!

Aaron


RE: Mesos metrics -> influxdb

2016-02-26 Thread Aaron Carey
Thanks Alberto, those are the two projects I was having issues with. I was 
hoping not to have to fork the first one (to add 0.24 functionality) and the 
second just keeps timing out and crashing (!)

I'll have a look at Telegraf!

Thanks all for your help,

Aaron
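
For reference, Telegraf (the InfluxDB collection agent) ships a Mesos input plugin; a minimal config sketch follows. The plugin and option names are from memory of the Telegraf docs, so treat them as assumptions to verify against your Telegraf version:

```toml
# Hypothetical telegraf.conf fragment: poll a Mesos master's metrics
# endpoint and write the results to InfluxDB.
[[inputs.mesos]]
  masters = ["localhost:5050"]

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "mesos"
```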


From: Alberto del Barrio [alberto.delbarrio.albe...@gmail.com]
Sent: 26 February 2016 08:04
To: user@mesos.apache.org
Subject: Re: Mesos metrics -> influxdb

I am using collectd to gather those metrics and I am pretty happy with it.
I use two plugins:
 - To gather mesos metrics (System metrics, number of slaves connected... )
https://github.com/rayrod2030/collectd-mesos
 - To gather CPU and memory used per mesos tasks: 
https://github.com/bobrik/collectd-mesos-tasks

The setup is stable, it never crashed. Regarding version 0.24, it can work 
with a few modifications.


On 02/25/16 18:32, Aaron Carey wrote:
Has anyone had a good experience recording mesos metrics into influxdb?

I've found a couple of options, a collectd plugin which doesn't appear to work 
with version 0.24.x and a more up to date containerised option which randomly 
crashes regularly and doesn't appear to actually post any stats.

Anyone have any good solutions?





Mesos metrics -> influxdb

2016-02-25 Thread Aaron Carey
Has anyone had a good experience recording mesos metrics into influxdb?

I've found a couple of options, a collectd plugin which doesn't appear to work 
with version 0.24.x and a more up to date containerised option which randomly 
crashes regularly and doesn't appear to actually post any stats.

Anyone have any good solutions?




RE: AW: Feature request: move in-flight containers w/o stopping them

2016-02-22 Thread Aaron Carey
If this is of any use to anyone: There is also an outstanding branch of Docker 
which has checkpoint/restore functionality in it (based on CRIU I believe) 
which is hopefully being merged into experimental soon.


From: Sharma Podila [spod...@netflix.com]
Sent: 19 February 2016 14:49
To: user@mesos.apache.org
Subject: Re: AW: Feature request: move in-flight containers w/o stopping them

Moving stateless services can be trivial or a non-problem, as others have 
suggested.
Migrating state full services becomes a function of migrating the state, 
including any network conx, etc. To think aloud, from a bit of past 
considerations in hpc like systems, some systems relied upon the underlying 
systems to support migration (vMotion, etc.), to 3rd party libraries (was that 
Meiosys) that could work on existing application binaries, to libraries 
(BLCR) 
that need support from application developer. I was involved with providing 
support for BLCR based applications. One of the challenges was the time to 
checkpoint an application with large memory footprint, say, 100 GB or more, 
which isn't uncommon in hpc. Incremental checkpointing wasn't an option, at 
least at that point.
Regardless, Mesos' support for checkpoint-restore would have to consider the 
type of checkpoint-restore being used. I would imagine that the core part of 
the solution would be simple'ish, in providing a "workflow" for the 
checkpoint-restore system (sort of send signal to start checkpoint, wait 
certain time to complete or timeout). Relatively less simple would be the 
actual integration of the checkpoint-restore system and dealing with its 
constraints and idiosyncrasies.


On Fri, Feb 19, 2016 at 4:50 AM, Dick Davies 
> wrote:
Agreed, vMotion always struck me as something for those monolithic
apps with a lot of local state.

The industry seems to be moving away from that as fast as its little
legs will carry it.

On 19 February 2016 at 11:35, Jason Giedymin 
> wrote:
> Food for thought:
>
> One should refrain from monolithic apps. If they're small and stateless you
> should be doing rolling upgrades.
>
> If you find yourself with one container and you can't easily distribute that
> work load by just scaling and load balancing then you have a monolith. Time
> to enhance it.
>
> Containers should not be treated like VMs.
>
> -Jason
>
> On Feb 19, 2016, at 6:05 AM, Mike Michel 
> > wrote:
>
> Question is if you really need this when you are moving in the world of
> containers/microservices where it is about building stateless 12factor apps
> except databases. Why moving a service when you can just kill it and let the
> work be done by 10 other containers doing the same? I remember a talk on
> dockercon about containers and live migration. It was like: „And now that
> you know how to do it, don't do it!“
>
>
>
> From: Avinash Sridharan 
> [mailto:avin...@mesosphere.io]
> Sent: Friday, 19 February 2016 05:48
> To: user@mesos.apache.org
> Subject: Re: Feature request: move in-flight containers w/o stopping them
>
>
>
> One problem with implementing something like vMotion for Mesos is to address
> seamless movement of network connectivity as well. This effectively requires
> moving the IP address of the container across hosts. If the container shares
> host network stack, this won't be possible since this would imply moving the
> host IP address from one host to another. When a container has its network
> namespace, attached to the host, using a bridge, moving across L2 segments
> might be a possibility. To move across L3 segments you will need some form
> of overlay (VxLAN maybe ?) .
>
>
>
> On Thu, Feb 18, 2016 at 7:34 PM, Jay Taylor 
> > wrote:
>
> Is this theoretically feasible with Linux checkpoint and restore, perhaps
> via CRIU?http://criu.org/Main_Page
>
>
> On Feb 18, 2016, at 4:35 AM, Paul Bell 
> > wrote:
>
> Hello All,
>
>
>
> Has there ever been any consideration of the ability to move in-flight
> containers from one Mesos host node to another?
>
>
>
> I see this as analogous to VMware's "vMotion" facility wherein VMs can be
> moved from one ESXi host to another.
>
>
>
> I suppose something like this could be useful from a load-balancing
> perspective.
>
>
>
> Just curious if it's ever been considered and if so - and rejected - why
> rejected?
>
>
>
> Thanks.
>
>
>
> -Paul
>
>
>
>
>
>
>
>
>
> --
>
> Avinash Sridharan, Mesosphere
>
> +1 (323) 702 5245



RE: AW: Feature request: move in-flight containers w/o stopping them

2016-02-22 Thread Aaron Carey
Would you be able to elaborate a bit more on how you did this?


From: Mauricio Garavaglia [mauri...@medallia.com]
Sent: 19 February 2016 19:20
To: user@mesos.apache.org
Subject: Re: AW: Feature request: move in-flight containers w/o stopping them

Mesos is not only about running stateless microservices to handle http 
requests. There are long duration workloads that would benefit from being 
rescheduled to a different host without being interrupted, e.g. to implement 
dynamic bin packing in the cluster.

CRIU has shown that the networking issue can be solved even at the socket 
level. Regarding moving IPs around, Project 
Calico offers a way to do that; we tried homemade 
modifications using Docker and OSPF and it works very well.

On Fri, Feb 19, 2016 at 11:49 AM, Sharma Podila 
> wrote:
Moving stateless services can be trivial or a non-problem, as others have 
suggested.
Migrating state full services becomes a function of migrating the state, 
including any network conx, etc. To think aloud, from a bit of past 
considerations in hpc like systems, some systems relied upon the underlying 
systems to support migration (vMotion, etc.), to 3rd party libraries (was that 
Meiosys) that could work on existing application binaries, to libraries 
(BLCR) 
that need support from application developer. I was involved with providing 
support for BLCR based applications. One of the challenges was the time to 
checkpoint an application with large memory footprint, say, 100 GB or more, 
which isn't uncommon in hpc. Incremental checkpointing wasn't an option, at 
least at that point.
Regardless, Mesos' support for checkpoint-restore would have to consider the 
type of checkpoint-restore being used. I would imagine that the core part of 
the solution would be simple'ish, in providing a "workflow" for the 
checkpoint-restore system (sort of send signal to start checkpoint, wait 
certain time to complete or timeout). Relatively less simple would be the 
actual integration of the checkpoint-restore system and dealing with its 
constraints and idiosyncrasies.


On Fri, Feb 19, 2016 at 4:50 AM, Dick Davies 
> wrote:
Agreed, vMotion always struck me as something for those monolithic
apps with a lot of local state.

The industry seems to be moving away from that as fast as its little
legs will carry it.

On 19 February 2016 at 11:35, Jason Giedymin 
> wrote:
> Food for thought:
>
> One should refrain from monolithic apps. If they're small and stateless you
> should be doing rolling upgrades.
>
> If you find yourself with one container and you can't easily distribute that
> work load by just scaling and load balancing then you have a monolith. Time
> to enhance it.
>
> Containers should not be treated like VMs.
>
> -Jason
>
> On Feb 19, 2016, at 6:05 AM, Mike Michel 
> > wrote:
>
> Question is if you really need this when you are moving in the world of
> containers/microservices where it is about building stateless 12factor apps
> except databases. Why moving a service when you can just kill it and let the
> work be done by 10 other containers doing the same? I remember a talk on
> dockercon about containers and live migration. It was like: „And now that
> you know how to do it, don't do it!“
>
>
>
> From: Avinash Sridharan 
> [mailto:avin...@mesosphere.io]
> Sent: Friday, 19 February 2016 05:48
> To: user@mesos.apache.org
> Subject: Re: Feature request: move in-flight containers w/o stopping them
>
>
>
> One problem with implementing something like vMotion for Mesos is to address
> seamless movement of network connectivity as well. This effectively requires
> moving the IP address of the container across hosts. If the container shares
> host network stack, this won't be possible since this would imply moving the
> host IP address from one host to another. When a container has its network
> namespace, attached to the host, using a bridge, moving across L2 segments
> might be a possibility. To move across L3 segments you will need some form
> of overlay (VxLAN maybe ?) .
>
>
>
> On Thu, Feb 18, 2016 at 7:34 PM, Jay Taylor 
> > wrote:
>
> Is this theoretically feasible with Linux checkpoint and restore, perhaps
> via CRIU?http://criu.org/Main_Page
>
>
> On Feb 18, 2016, at 4:35 AM, Paul Bell 
> > wrote:
>
> Hello All,
>
>
>
> Has there ever been any consideration of the ability to move in-flight
> containers from one Mesos host node to another?
>
>
>
> I see this as analogous to VMware's "vMotion" facility 

RE: ansible modules?

2016-02-08 Thread Aaron Carey
https://github.com/udacity/ansible-marathon
https://github.com/AnsibleShipyard/ansible-marathon
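
For simple cases you may not need a dedicated module at all: Marathon's REST API takes a JSON app definition via a POST to /v2/apps, which Ansible's stock uri module (or plain curl) can send. A hedged sketch of the payload — the host and all field values below are placeholders:

```shell
# Minimal Marathon app definition; the (commented) POST is what those
# modules do under the hood.
payload='{"id":"/hello","cmd":"sleep 3600","cpus":0.1,"mem":64,"instances":1}'
printf '%s\n' "$payload"
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$payload" http://marathon.example.com:8080/v2/apps
```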




From: Antonio Fernandez [antonio.fernan...@bq.com]
Sent: 08 February 2016 15:08
To: user@mesos.apache.org
Subject: Re: ansible modules?

René,

take a look on this repo from CiscoCloud:

https://github.com/CiscoCloud/microservices-infrastructure

There are already many Ansible modules there, though some pieces may be 
missing.

Hope it helps.

On Mon, 8 Feb 2016 at 15:42 Rene Moser 
> wrote:
Hi

Has anyone already built Ansible modules for Marathon's and Chronos' APIs
and wants to share?

If not, I would like to start with it. Any help is welcome

René
--

Antonio Fernández Vara

(SW Devops Leader)

+34 91 787 67 07  Ext: 1850

Calle Dublin, 1, Planta Primera

Ed. Sevilla.

Európolis

28232 Las Rozas - Madrid

bq.com

  



RE: Custom python executor with Docker

2015-11-10 Thread Aaron Carey
Yeah.. it'd be nice to do it natively though :)


From: Plotka, Bartlomiej [bartlomiej.plo...@intel.com]
Sent: 10 November 2015 15:38
To: user@mesos.apache.org
Subject: RE: Custom python executor with Docker

This is somehow possible using Kubernetes over Mesos: 
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos.md

Kind Regards,
Bartek Plotka

From: Aaron Carey [mailto:aca...@ilm.com]
Sent: Tuesday, November 10, 2015 4:33 PM
To: user@mesos.apache.org
Subject: RE: Custom python executor with Docker

We would also be interested in some sort of standardised DockerExecutor which 
would allow us to add pre- and post-launch steps.

Also having the ability to run two containers together as one task would be 
very useful (ie on the same host and linked together)

From: Tom Fordon [tom.for...@gmail.com]
Sent: 12 August 2015 00:28
To: user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Custom python executor with Docker
We ended up implementing a solution where we did the pre/post steps as separate 
mesos tasks and adding logic to our scheduler to ensure they were run on the 
same machine.  If anybody knows of a standard / openly available DockerExecutor 
like what is described below, my team would be greatly interested.

On Fri, Aug 7, 2015 at 4:01 AM, Kapil Malik 
<kma...@adobe.com<mailto:kma...@adobe.com>> wrote:
Hi,

We have a similar usecase while running multi-user workloads on mesos. Users 
provide docker images encapsulating application logic, which we (we = say some 
“Central API”) schedule on Chronos / Marathon. However, we need to run some 
standard pre / post steps for every docker submitted by users. We have 
following options –


1.   Ask every user to embed their logic inside a pre-defined docker 
template which will perform pre/post steps.

==> This is error-prone, makes us dependent on whether the users followed the 
template, and is not very popular with users either.



2.   Extend every user docker (FROM <>) and find a way to add pre-post 
steps in our docker. Refer this docker when scheduling on chronos / marathon.

==> Building new dockers does not scale as users and applications grow



3.   Write a custom executor which will perform the pre-post steps and 
manage the user docker lifetime.

==> Deals with user docker lifetime and is obviously complex.

Is there a standard / openly available DockerExecutor which manages the docker 
lifetime and which I can extend to build my custom executor? This way I will be 
concerned only with my custom logic (pre/post steps) and still get benefits of 
a standard way to manage docker containers.

Btw, thanks for the meaningful discussion below, it is very helpful.

Thanks and regards,

Kapil Malik | kma...@adobe.com<mailto:kma...@adobe.com> | 33430 / 8800836581

From: James DeFelice 
[mailto:james.defel...@gmail.com<mailto:james.defel...@gmail.com>]
Sent: 09 April 2015 18:12
To: user@mesos.apache.org<mailto:user@mesos.apache.org>
Subject: Re: Custom python executor with Docker

If you can run the pre/post steps in a container then I'd recommend building a 
Docker image that includes your pre/post step scripting + your algorithm and 
launching it using the built-in mesos Docker containerizer. It's much simpler 
than managing the lifetime of the Docker container yourself.

On Thu, Apr 9, 2015 at 8:29 AM, Tom Fordon 
<tom.for...@gmail.com<mailto:tom.for...@gmail.com>> wrote:
Thanks for all the responses, I really appreciate the help.  Let me try to 
state my problem more clearly

Our project is performing file-based data processing.  I would like to keep the 
actual algorithm as contained as possible since we are in an R setting and 
will be getting untested code.  We have some pre/post steps that need to be run 
on the same box as the actual algorithm: downloading/uploading files and 
database calls.

We can run the pre/post steps and algorithm within the same container.  The 
algorithm will be a little less contained, but it will work.

Docker letting you specify a cgroup parent is really exciting.  If I invoke a 
docker container with the executor as the cgroup-parent are there any other 
steps I need to perform?  Would I need to do anything special to make mesos 
aware of the resource usage, or is that handled since the docker process would 
be in the executor's cgroup?
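
As a hedged sketch of that invocation (`--cgroup-parent` has existed in Docker since roughly 1.6; the image name below is a placeholder, and whether Mesos then accounts the usage correctly is exactly the open question):

```shell
# Find this (executor) process's own cgroup path, then start the task
# container beneath it so the kernel charges usage to the same hierarchy.
EXECUTOR_CGROUP=$(awk -F: '{print $3; exit}' /proc/self/cgroup)
echo "executor cgroup: $EXECUTOR_CGROUP"
# Placeholder image; the actual algorithm container would be run like:
# docker run --cgroup-parent="$EXECUTOR_CGROUP" my-algorithm-image
```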

Thanks again,
Tom

On Tue, Apr 7, 2015 at 8:10 PM, Timothy Chen 
<tnac...@gmail.com<mailto:tnac...@gmail.com>> wrote:
Hi Tom(s),

Tom Arnfeld is right, if you want to launch your own docker container
in your custom executor you will have to handle all the issues
yourself and not able to use the Docker containerizer at all.

Alternatively, you can actually launch your custom executor in a
Docker container by Mesos, by specifying the ContainerInfo in the
ExecutorInfo.
What this means is that your cu

RE: Custom python executor with Docker

2015-11-10 Thread Aaron Carey
We would also be interested in some sort of standardised DockerExecutor which 
would allow us to add pre- and post-launch steps.

Also having the ability to run two containers together as one task would be 
very useful (ie on the same host and linked together)


From: Tom Fordon [tom.for...@gmail.com]
Sent: 12 August 2015 00:28
To: user@mesos.apache.org
Subject: Re: Custom python executor with Docker

We ended up implementing a solution where we did the pre/post steps as separate 
mesos tasks and adding logic to our scheduler to ensure they were run on the 
same machine.  If anybody knows of a standard / openly available DockerExecutor 
like what is described below, my team would be greatly interested.

On Fri, Aug 7, 2015 at 4:01 AM, Kapil Malik 
> wrote:
Hi,

We have a similar usecase while running multi-user workloads on mesos. Users 
provide docker images encapsulating application logic, which we (we = say some 
“Central API”) schedule on Chronos / Marathon. However, we need to run some 
standard pre / post steps for every docker submitted by users. We have 
following options –


1.   Ask every user to embed their logic inside a pre-defined docker 
template which will perform pre/post steps.

==> This is error-prone, makes us dependent on whether the users followed the 
template, and is not very popular with users either.



2.   Extend every user docker (FROM <>) and find a way to add pre-post 
steps in our docker. Refer this docker when scheduling on chronos / marathon.

==> Building new dockers does not scale as users and applications grow



3.   Write a custom executor which will perform the pre-post steps and 
manage the user docker lifetime.

==> Deals with user docker lifetime and is obviously complex.

Is there a standard / openly available DockerExecutor which manages the docker 
lifetime and which I can extend to build my custom executor? This way I will be 
concerned only with my custom logic (pre/post steps) and still get benefits of 
a standard way to manage docker containers.

Btw, thanks for the meaningful discussion below, it is very helpful.

Thanks and regards,

Kapil Malik | kma...@adobe.com | 33430 / 8800836581

From: James DeFelice 
[mailto:james.defel...@gmail.com]
Sent: 09 April 2015 18:12
To: user@mesos.apache.org
Subject: Re: Custom python executor with Docker

If you can run the pre/post steps in a container then I'd recommend building a 
Docker image that includes your pre/post step scripting + your algorithm and 
launching it using the built-in mesos Docker containerizer. It's much simpler 
than managing the lifetime of the Docker container yourself.

On Thu, Apr 9, 2015 at 8:29 AM, Tom Fordon 
> wrote:
Thanks for all the responses, I really appreciate the help.  Let me try to 
state my problem more clearly

Our project is performing file-based data processing.  I would like to keep the 
actual algorithm as contained as possible since we are in an R setting and 
will be getting untested code.  We have some pre/post steps that need to be run 
on the same box as the actual algorithm: downloading/uploading files and 
database calls.

We can run the pre/post steps and algorithm within the same container.  The 
algorithm will be a little less contained, but it will work.

Docker letting you specify a cgroup parent is really exciting.  If I invoke a 
docker container with the executor as the cgroup-parent are there any other 
steps I need to perform?  Would I need to do anything special to make mesos 
aware of the resource usage, or is that handled since the docker process would 
be in the executor's cgroup?

Thanks again,
Tom

On Tue, Apr 7, 2015 at 8:10 PM, Timothy Chen 
> wrote:
Hi Tom(s),

Tom Arnfeld is right, if you want to launch your own docker container
in your custom executor you will have to handle all the issues
yourself and not able to use the Docker containerizer at all.

Alternatively, you can actually launch your custom executor in a
Docker container by Mesos, by specifying the ContainerInfo in the
ExecutorInfo.
What this means is that your custom executor is already running in a
docker container, and you can do your custom logic afterwards. This
does mean you can't simply launch multiple containers in the
executor anymore.

If there is something you want to do that doesn't fit these, let us know
what you're trying to achieve and we can see what we can do.

Tim

On Tue, Apr 7, 2015 at 4:15 PM, Tom Arnfeld 
> wrote:
> It's not possible to invoke the docker containerizer from outside of Mesos,
> as far as I know.
>
> If you persue this route, you can run into issues with orphaned containers
> as your executor may die for some unknown reason, and the container is still
> 

RE: Docker Private Registry problem when update from 0.23 to 0.25

2015-10-23 Thread Aaron Carey
Ah interesting.. I reported exactly the same problem with NFS the other day!

@Luke: we also had some issues with 1.8.0, 1.8.3 seems to have fixed things for 
us though!



From: craig w [codecr...@gmail.com]
Sent: 23 October 2015 14:54
To: user@mesos.apache.org
Subject: Re: Docker Private Registry problem when update from 0.23 to 0.25

@Luke - Good to know. I have a similar setup, only different being our timeout 
is at 5 minutes. We had problems using docker 1.8.x with a private registry 
that was backed by NFS, apparently we're not alone 
(https://github.com/docker/docker/issues/15833).

On Fri, Oct 23, 2015 at 9:52 AM, Luke Amdor 
> wrote:
We currently run Mesos 0.25.0 with Marathon 0.11.1 with a private docker 
registry (distribution v2.1.1, no creds required) and haven't had any weird 
problems running docker containers. We however do run our marathon with a 
really high task_launch_confirm_timeout of 60 (10 minutes) as we were 
running into problems with docker pulls and staged tasks. We also run docker 
version 1.7.1 as we ran into problems with pulls on docker 1.8.0. We haven't 
yet tried the newest docker release to see if it's helped.

On Fri, Oct 23, 2015 at 6:50 AM, craig w 
> wrote:
I wasn't sure because of this comment: 
https://github.com/mesosphere/marathon/pull/2462#issuecomment-148703383

On Fri, Oct 23, 2015 at 7:41 AM, haosdent 
> wrote:
According to https://github.com/mesosphere/marathon/pull/2462 , I think the 
patches were merged into 0.12.0-RC1 and 0.11.1 doesn't contain them yet.

On Fri, Oct 23, 2015 at 7:32 PM, craig w 
> wrote:
Has anyone confirmed that Mesos 0.25.0 with Marathon 0.11.1 with a private 
docker registry (without credentials) works?

On Thu, Oct 15, 2015 at 4:51 PM, Jan Stabenow 
> 
wrote:
Marathon 0.11.1 + Mesos 0.25.0 + private Docker-Reg (without credentials) 
works, but with a new error: Marathon (or Mesos) finished each task with a 
"finished" state immediately.

So let’s wait… 
https://github.com/mesosphere/marathon/pull/2462#issuecomment-148482471


Am 15.10.2015 um 21:29 schrieb Jan Stabenow 
>:

Hey Brian,

sorry for my late response.

I'll try the latest 0.11.1.

This may be one of the problems:
https://github.com/mesosphere/marathon/pull/2415


Am 15.10.2015 um 21:14 schrieb craig w 
>:


I've successfully pulled images from a private registry using Mesos 0.24.1 and 
marathon 0.11. I use forcePullImage: true.

On Oct 15, 2015 2:25 PM, "Brian Devins" 
> wrote:
Has anyone found some resolution for this?

I have the mesosphere repo versions of mesos and marathon. 0.24.1 and 0.11.0 
respectively. I have also tried 0.10.1 for marathon but still no dice. I can 
manually pull the images if I log into the box.

On Wed, Oct 14, 2015 at 2:30 AM, Joachim Andersson 
> wrote:

Those images I have tried with Marathon 0.11

I use "disk": 512,


Here is my app config.


{
  "id": "/marathonkempaffb060830d6ad074c66923b77f75e199a4c8a23",
  "env": {
    "GIT_CALCULATED_BRANCH": "affb060830d6ad074c66923b77f75e199a4c8a23",
    "marathonkemp_kemp_password": "!12345!m"
  },
  "instances": 1,
  "cpus": 0.5,
  "mem": 1024,
  "disk": 512,
  "executor": "",
  "constraints": [
    ["hostname", "UNIQUE"]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "dockerregistry.dte.loc:80/marathonkemp:affb060830d6ad074c66923b77f75e199a4c8a23",
      "network": "BRIDGE",
      "portMappings": [
        {"containerPort": 8080, "hostPort": 0, "servicePort": 1, "protocol": "tcp"},
        {"containerPort": 8000, "hostPort": 0, "servicePort": 10001, "protocol": "tcp"},
        {"containerPort": 9010, "hostPort": 0, "servicePort": 10002, "protocol": "tcp"}
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "healthChecks": [
    {

RE: Mesos slave in docker container

2015-09-30 Thread Aaron Carey
We run both our Master and Agent processes as docker containers.. it works well 
although we don't have strict security requirements..


From: Krish [krishnan.k.i...@gmail.com]
Sent: 30 September 2015 13:58
To: user@mesos.apache.org
Subject: Mesos slave in docker container


I see that we can run mesos-slave in a privileged docker container. I also see 
tutorials online for guidance.
However, I am curious to know the pros & cons of such an approach.

Pros: Containerization helps, & can help in running on various server distros.
Cons: Security is one. Any way to solve it?

Are there any others that I am unaware of?

Thanks.

--
κρισhναν
n00b on mesos



RE: Metric for tasks queued/waiting?

2015-09-24 Thread Aaron Carey
Thanks Alex,

The problem here is more along the lines of getting the metrics to feed into 
the algorithm, rather than the algorithm itself. Relay looks very cool though 
thanks :)

Aaron


From: Alex Gaudio [adgau...@gmail.com]
Sent: 23 September 2015 21:54
To: user@mesos.apache.org
Subject: Re: Metric for tasks queued/waiting?


Hi Aaron,

You might consider trying to solve the autoscaling problem with Relay, a Python 
tool I use to solve this problem.  Feel free to shoot me an email if you are 
interested.

github.com/sailthru/relay

Alex

On Wed, Sep 23, 2015, 11:03 AM David Greenberg 
<dsg123456...@gmail.com> wrote:
In addition, this technique could be implemented in the allocator with an 
understanding of global demand: https://www.youtube.com/watch?v=BkBMYUe76oI

That would allow for tunable fair-sharing based on DRF-principles.

On Wed, Sep 23, 2015 at 10:59 AM haosdent 
<haosd...@gmail.com> wrote:

Feel free to open a story in jira if you think you ideas are awesome. :-)

On Sep 23, 2015 10:54 PM, "Sharma Podila" 
<spod...@netflix.com> wrote:
Ah, OK, thanks. Yes, Fenzo is a Java library.

It might be a nice addition to Mesos master to get a global view of contention 
for resources. In addition to autoscaling, it would be useful in the allocator.



On Wed, Sep 23, 2015 at 7:29 AM, Aaron Carey 
<aca...@ilm.com> wrote:
Thanks Sharma,

I was in the audience for a talk you did about Fenzo at MesosCon :) It looked 
great but we're a python shop primarily so the Java requirement would be a 
problem for us.

The scaling in the scheduler makes total sense, (obvious when you think about 
it!), I was naively hoping for some sort of knowledge of that back in the Mesos 
master as we were hoping to have scaling be independent of schedulers. I think 
this'll need a re-think!

Thanks for your help!

Aaron


From: Sharma Podila [spod...@netflix.com]
Sent: 23 September 2015 15:22

To: user@mesos.apache.org
Subject: Re: Metric for tasks queued/waiting?

Jobs/tasks wait in framework schedulers, not mesos master. Autoscaling triggers 
must come from schedulers, not only because that's who knows the pending task 
set size, but, also because it knows how many of them need to be launched right 
away, on what kind of machines.

We built such an autoscaling capability in our framework schedulers. The 
autoscaling is achieved by our library Fenzo (https://github.com/Netflix/Fenzo), 
which we open sourced recently. Also read about Fenzo autoscaling here: 
https://github.com/Netflix/Fenzo/wiki/Autoscaling. You should look into 
using that if you are developing your own scheduler. Or, have your scheduler 
team pick up Fenzo for autoscaling.

Also, note that scaling up is temptingly easy by watching the pending task 
queue. But, scaling down requires bin packing, etc. Other issues pop up as 
well, for example:

- what if a user submits tasks that cannot be satisfied? Will autoscale keep 
increasing the cluster size unbounded?
- what if you would like to have a heterogeneous mix of hosts and tasks? which 
kind of hosts do you need to autoscale based on which tasks are pending?

These are automatically addressed in Fenzo.

Sharma


On Wed, Sep 23, 2015 at 4:56 AM, Aaron Carey 
<aca...@ilm.com> wrote:
No, I basically had the same question as Jim (but maybe didn't word it so well 
;))

I'll have a look at your response there :)


From: haosdent [haosd...@gmail.com]
Sent: 23 September 2015 10:12
To: user@mesos.apache.org
Subject: Re: Metric for tasks queued/waiting?

Does /metrics/snapshot not satisfy your requirement?
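For reference, /metrics/snapshot returns one flat JSON object of counters and gauges. A hedged sketch of polling it (the master address is an assumption, and the gauge names below are as exposed by recent masters, so verify them against your version; crucially, these only count tasks the master has already seen, while tasks still queued inside a scheduler never appear here, which is exactly the gap discussed in this thread):

```python
import json
from urllib.request import urlopen

def not_yet_running(snapshot):
    """Count tasks the master knows about that are not running yet.
    This is only a rough proxy for 'queued' work."""
    return (snapshot.get("master/tasks_staging", 0)
            + snapshot.get("master/tasks_starting", 0))

def poll(master="http://localhost:5050"):  # assumed master address
    # Fetch and parse the flat metrics map, then reduce it.
    with urlopen(master + "/metrics/snapshot") as resp:
        return not_yet_running(json.load(resp))
```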

On Wed, Sep 23, 2015 at 4:50 PM, Aaron Carey 
<aca...@ilm.com> wrote:
Hi all,

Is there any way to get a metric of all tasks currently waiting/queued in Mesos 
(across all schedulers)? The snapshot metrics seem to cover every other kind of 
task state. This would be quite useful for auto-scaling purposes.

Thanks,
Aaron



--
Best Regards,
Haosdent Huang




RE: Metric for tasks queued/waiting?

2015-09-23 Thread Aaron Carey
No, I basically had the same question as Jim (but maybe didn't word it so well 
;))

I'll have a look at your response there :)


From: haosdent [haosd...@gmail.com]
Sent: 23 September 2015 10:12
To: user@mesos.apache.org
Subject: Re: Metric for tasks queued/waiting?

Does /metrics/snapshot not satisfy your requirement?

On Wed, Sep 23, 2015 at 4:50 PM, Aaron Carey 
<aca...@ilm.com> wrote:
Hi all,

Is there any way to get a metric of all tasks currently waiting/queued in Mesos 
(across all schedulers)? The snapshot metrics seem to cover every other kind of 
task state. This would be quite useful for auto-scaling purposes.

Thanks,
Aaron



--
Best Regards,
Haosdent Huang


RE: Can mesos provide high availability for any generic app/framework on cassandra cluster?

2015-08-17 Thread Aaron Carey
I have limited knowledge of Azkaban/Spark, but this sounds like a good fit for 
the Chronos (or Aurora) schedulers.

Mesos itself doesn't provide the scheduling logic (i.e. the dependency 
information, the 'run this task at x time' info, etc.). Chronos or Aurora run 
on top of Mesos as scheduler frameworks; you'd then submit your jobs to 
Chronos, which would schedule them across your Mesos cluster.



From: Vikram Kone [vikramk...@gmail.com]
Sent: 17 August 2015 22:46
To: user@mesos.apache.org
Cc: Vinod Kone
Subject: Can mesos provide high availability for any generic app/framework on 
cassandra cluster?


Hi,

I'm looking at existing open source workflow engines we can use for scheduling 
spark jobs with intricate dependencies on a datastax cassandra cluster. 
Currently we are using crontab to schedule jobs and want to move to something 
which is more robust and highly available.

There are 2 main problems with cron on cassandra as we have it today:

1. Single point of failure: our cron tasks that do spark-submit run on a single 
machine, and if that machine goes down in the cluster, all the jobs are kaput 
till the node comes back up.
2. Can't easily specify job dependencies between cron tasks to model a DAG.

One of the workflow engines I'm looking at is Azkaban, where job authoring and 
dependency config is easy via a Web UI and REST APIs. But it also has a single 
point of failure in the Azkaban master. I'm also open to running the workflow 
engine on a separate cluster, but since spark doesn't allow remote job 
submissions natively, we are stuck with running the workflow engine on the same 
cassandra cluster.

High availability of the Spark master is taken care of in Datastax's version 
of cassandra, so all I need to do is provide HA for Azkaban. Is Mesos the right 
tool for this?

I can either go Spark + Mesos + Zookeeper if Mesos provides the ability to 
configure jobs with dependencies, i.e. run job A after job B and job C are 
finished.

Or go with Spark + Azkaban + Zookeeper if Mesos doesn't provide job dependency 
features.

Advice?

thx



RE: MesosCon Seattle attendee introduction thread

2015-08-17 Thread Aaron Carey
Hi All,

I'm Aaron and I work as a production/R&D engineer at Industrial Light & 
Magic. We've been experimenting with Mesos and Docker over the last 4 or 5 
months for a variety of purposes.

I'm really looking forward to hearing more on scheduling algorithms and per 
container IP applications. I'm also interested in seeing how people approach 
storage on a Mesos cluster!

Aaron


From: Nic Grayson [nic.gray...@banno.com]
Sent: 17 August 2015 18:48
To: user@mesos.apache.org
Subject: Re: MesosCon Seattle attendee introduction thread

Hi,

I'm Nic Grayson, Software Engineer at Banno/Jack Henry & Associates. I'm 
excited to return to MesosCon this year. I'll be bringing more of our team with 
me this year, 7 in total.

We've been hard at work automating deployments with terraform, marathon, and 
mesos. I’m excited to see the progress all of the major frameworks have made 
over the last year. We are now using terraform to interact with the kafka 
framework api (http://nicgrayson.com/mesos-kafka-terraform/)

Nic

On Mon, Aug 17, 2015 at 12:20 PM, Sharma Podila 
spod...@netflix.com wrote:
Hello Everyone,

I am Sharma Podila, senior software engineer at Netflix. It is exciting to be a 
part of MesosCon again this year.
We developed a cloud native Mesos framework to run a mix of service, batch, and 
stream processing workloads. To which end we created a reusable plug-ins based 
scheduling library, Fenzo. I am looking forward to presenting an in-depth look 
on Thurs at 2pm about how we achieve scheduling objectives and cluster 
autoscaling, as well as share some of our results with you.

I am interested in learning about and collaborating with you all regarding 
scheduling and framework development.

Sharma



On Mon, Aug 17, 2015 at 2:11 AM, Ankur Chauhan 
an...@malloc64.com wrote:
Hi all,

I am Ankur Chauhan. I am a Sr. Software engineer with the Reporting and 
Analytics team
at Brightcove Inc. I have been evaluating, tinkering, developing with mesos for 
about an year
now. My latest adventure has been in the spark mesos integration and writing 
the new apache flink -
mesos integration.

I am interested in learning about managing stateful services in mesos and 
creating better documentation
for the project.

I am very excited to meet everyone!

-- Ankur Chauhan.

 On 17 Aug 2015, at 00:10, Trevor Powell 
 trevor.pow...@rms.com wrote:

 Hey Mesos Family! Can't wait to see you all in person.

 I'm Trevor Powell. I am the Product Owner for our TechOps engineering team
 at RMS. RMS is in the catastrophic modeling business. Think of it as
 modeling Acts of God (earthquakes, floods, Godzilla, etc.) on physical
 property and damages associated with them.

 We've been evaluating Mesos this year, and we are planning to launch it in
 PRD at the start of next. I am super excited :-)

 I am very interested in managing stateful applications inside Mesos. Also
 network segmentation in Mesos (see my "Mesos, Multinode Workload Network
 segregation" email thread earlier this month).

 See you all Thursday!!

 Stay Smooth,

 --

 Trevor Alexander Powell
 Sr. Manager, Cloud Engineering & Architecture
 7575 Gateway Blvd. Newark, CA 94560
 T: +1.510.713.3751
 M: +1.650.325.7467
 www.rms.com
 https://www.linkedin.com/in/trevorapowell

 https://github.com/tpowell-rms






 On 8/16/15, 1:58 PM, Dave Lester 
 d...@davelester.org wrote:

 Hi All,

 I'd like to kick off a thread for folks to introduce themselves in
 advance of #MesosCon
 http://events.linuxfoundation.org/events/mesoscon. Here goes:

 My name is Dave Lester, and I'm an Open Source Advocate at Twitter. I am
 a member of the MesosCon program committee, along with a stellar group
 of other community members who have volunteered
 http://events.linuxfoundation.org/events/mesoscon/program/programcommitte
 e.
 Can't wait to meet as many of you as possible.

 I'm eager to meet with folks interested in learning more about how we
 deploy and manage services at Twitter using Mesos and Apache Aurora
 http://aurora.apache.org. Twitter has a booth where I'll be hanging
 out for a portion of the conference, feel free to stop by and say hi.
 I'm also interested in connecting with companies that use Mesos; let's
 make sure we add you to our #PoweredByMesos list
 http://mesos.apache.org/documentation/latest/powered-by-mesos/.

 I'm also on Twitter: @davelester

 Next!






RE: Custom executor

2015-07-29 Thread Aaron Carey
ah cool! Will that run as one instance per task, or one scheduler per slave?



From: Connor Doyle [connor@gmail.com]
Sent: 29 July 2015 17:24
To: user@mesos.apache.org
Subject: Re: Custom executor

You don't even have to pre-load the executor on the slave boxes -- just add it 
as a URL and it will be downloaded to the sandbox like any other resource!
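To make this concrete: the executor binary goes into the executor's CommandInfo URIs, and the Mesos fetcher downloads each entry into the task sandbox before launch. A minimal sketch with plain dicts mirroring the proto layout (the URL and binary name here are hypothetical):

```python
def executor_from_uri(executor_id, uri, binary):
    """Sketch of an ExecutorInfo whose binary is fetched into the sandbox;
    'executable': True asks the fetcher to mark the downloaded file +x."""
    return {
        "executor_id": {"value": executor_id},
        "command": {
            "value": "./" + binary,
            "uris": [{"value": uri, "executable": True}],
        },
    }

# Hypothetical download location, for illustration only.
info = executor_from_uri(
    "my-executor", "http://example.com/downloads/my-executor", "my-executor")
```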

On Jul 29, 2015, at 02:47, Aaron Carey aca...@ilm.com 
wrote:

Ah I see.. so is it simply a case of making the executor file executable, 
putting it on the slave, and supplying the path to it in the JSON?

Thanks!

Aaron


From: Ondrej Smola [ondrej.sm...@gmail.com]
Sent: 29 July 2015 10:13
To: user@mesos.apache.org
Subject: Re: Custom executor

Hi Aaron,

A custom executor should be supported by Marathon - I don't use it myself, but 
from the tests in

https://github.com/mesosphere/marathon/blob/master/src/test/scala/mesosphere/mesos/TaskBuilderTest.scala#L236

there is an option to specify the path to a custom executor.

https://mesosphere.github.io/marathon/docs/rest-api.html#post-/v2/apps

In the task definition there is an "executor" JSON prop.

Chronos also supports this property


Download/create some simple executor and try to test it.




2015-07-29 11:00 GMT+02:00 Aaron Carey aca...@ilm.com:
Hi Tim,

We have some specific requirements for moving data around when executing tasks 
on slaves, I want to be able to 'check out' a selection of files, and possibly 
mount filesystems onto the slave (and subsequently into the executing docker 
container). The data required by each task is specified in our database.

Basically I wanted to customise an executor to prepare the data on the slave 
before executing the docker container, rather than having to get the container 
to download its own data or attempt to mount NFS volumes itself.

I hope that all makes sense, I couldn't find a simple solution to this using 
the existing architecture.. I'd love to know your thoughts though!

Thanks,
Aaron


From: Tim Chen [t...@mesosphere.io]
Sent: 28 July 2015 19:01
To: user@mesos.apache.org
Subject: Re: Custom executor

Can you explain what your motivations are and what your new custom executor 
will do?

Tim

On Tue, Jul 28, 2015 at 5:08 AM, Aaron Carey 
aca...@ilm.com wrote:
Hi,

Is it possible to build a custom executor which is not associated with a 
particular scheduler framework? I want to be able to write a custom executor 
which is available to multiple schedulers (eg Marathon, Chronos and our own 
custom scheduler). Is this possible? I couldn't quite figure out the best way 
to go about this from the docs? Is it possible to mix and match languages for 
schedulers and executors? (ie one is python one is C++)

Thanks,
Aaron




RE: Custom executor

2015-07-29 Thread Aaron Carey
Hi Tim,

We have some specific requirements for moving data around when executing tasks 
on slaves, I want to be able to 'check out' a selection of files, and possibly 
mount filesystems onto the slave (and subsequently into the executing docker 
container). The data required by each task is specified in our database.

Basically I wanted to customise an executor to prepare the data on the slave 
before executing the docker container, rather than having to get the container 
to download its own data or attempt to mount NFS volumes itself.

I hope that all makes sense, I couldn't find a simple solution to this using 
the existing architecture.. I'd love to know your thoughts though!

Thanks,
Aaron


From: Tim Chen [t...@mesosphere.io]
Sent: 28 July 2015 19:01
To: user@mesos.apache.org
Subject: Re: Custom executor

Can you explain what your motivations are and what your new custom executor 
will do?

Tim

On Tue, Jul 28, 2015 at 5:08 AM, Aaron Carey 
aca...@ilm.com wrote:
Hi,

Is it possible to build a custom executor which is not associated with a 
particular scheduler framework? I want to be able to write a custom executor 
which is available to multiple schedulers (eg Marathon, Chronos and our own 
custom scheduler). Is this possible? I couldn't quite figure out the best way 
to go about this from the docs? Is it possible to mix and match languages for 
schedulers and executors? (ie one is python one is C++)

Thanks,
Aaron



RE: Custom executor

2015-07-29 Thread Aaron Carey
Hi All,

haosdent says that using a custom executor in Chronos and Marathon would 
require changing their code, but Sargun Dhillon suggests that this isn't 
necessary.

Anyone know which is correct? Perhaps with an example?

Thanks!

Aaron



From: haosdent [haosd...@gmail.com]
Sent: 28 July 2015 17:28
To: user@mesos.apache.org
Subject: Re: Custom executor

Hi, @Araon If you want to develop your custom framework, you could checkout 
this document 
https://github.com/apache/mesos/blob/master/docs/app-framework-development-guide.md
 first.
 I want to be able to write a custom executor which is available to multiple 
 schedulers (eg Marathon, Chronos and our own custom scheduler). Is this 
 possible?

If you want to write an executor used by Marathon/Chronos, you would need to 
change their code. I think this is difficult and don't suggest it.

 Is it possible to mix and match languages for schedulers and executors? (ie 
 one is python one is C++)

Yes, you could use different languages for different components. You just need 
to implement the interfaces and make sure the executor can run on the slaves.


On Tue, Jul 28, 2015 at 8:08 PM, Aaron Carey 
aca...@ilm.com wrote:
Hi,

Is it possible to build a custom executor which is not associated with a 
particular scheduler framework? I want to be able to write a custom executor 
which is available to multiple schedulers (eg Marathon, Chronos and our own 
custom scheduler). Is this possible? I couldn't quite figure out the best way 
to go about this from the docs? Is it possible to mix and match languages for 
schedulers and executors? (ie one is python one is C++)

Thanks,
Aaron



--
Best Regards,
Haosdent Huang


Custom executor

2015-07-28 Thread Aaron Carey
Hi,

Is it possible to build a custom executor which is not associated with a 
particular scheduler framework? I want to be able to write a custom executor 
which is available to multiple schedulers (eg Marathon, Chronos and our own 
custom scheduler). Is this possible? I couldn't quite figure out the best way 
to go about this from the docs? Is it possible to mix and match languages for 
schedulers and executors? (ie one is python one is C++)

Thanks,
Aaron


RE: Cluster of Workstations type design for a Mesos cluster

2015-07-21 Thread Aaron Carey
There's nothing stopping you running the mesos master and slave process on the 
same machine, so you could run the master process on your non-desktop machine 
if you're worried.

We have the master and slave processes run as docker containers and they can 
both end up on the same machine without any problems.


From: Gaston, Dan [dan.gas...@nshealth.ca]
Sent: 21 July 2015 14:44
To: 'user@mesos.apache.org'
Subject: RE: Cluster of Workstations type design for a Mesos cluster

Is there likely to be any issues with the Master? Given it would be an active 
desktop it would be running all of the typical mesos master stuff, plus say an 
active Ubuntu desktop environment. It would also need to host things like a 
local Docker registry and the like as well, since the compute nodes wouldn’t 
have direct access to the wider internet.

From: jeffschr...@gmail.com [mailto:jeffschr...@gmail.com] On Behalf Of Jeff 
Schroeder
Sent: Tuesday, July 21, 2015 10:42 AM
To: user@mesos.apache.org
Subject: Re: Cluster of Workstations type design for a Mesos cluster

As far as mesos is concerned, compute is a commodity. This should work just 
fine. Put Aurora or Marathon ontop of mesos if you need a general purpose 
scheduler and you're good to go. The nice thing is that you can add additional 
slaves as you need. I believe heterogeneous clusters are best if possible, but 
absolutely not a requirement of any sort.

On Tuesday, July 21, 2015, Gaston, Dan 
dan.gas...@nshealth.ca wrote:
Let’s say I had 2 high-performance workstations kicking around (dual 6-core, 
2.4GHz, xeon processors; 128 GB RAM each; etc) and a smaller workstation 
(single Xeon 4-core, 3.5GHz and 16 GB RAM) available and I wanted to cluster 
them together with Mesos. What is the best way of doing this? My thought was 
that the smaller workstation would be at my desk (the other two would be in the 
same office) because it would be used for development work and some general 
tasks but would also be the master node of the mesos cluster (note that HA 
isn’t a requirement here). This workstation would have two NICs, one connected 
to our institutional network and the other making up the private network 
between the clusters.

Is this even doable? Normally you would have some sort of client submitting to 
the Master but in this case the Master node would be serving up multiple roles. 
The other workstations would probably not have access to the institutional 
network, so all software updates and the like would have to be piped through 
the master workstation. There would also be a relatively large NAS device 
connected into this network as well.

Thoughts and suggestions welcome, even if it is to tell me I’m crazy. I’m 
building a small scale compute “cluster” that is fairly limited by budget (and 
the needs aren’t high either) and it may not be able to be located in a 
datacenter, hence the cluster of workstations type setup.




Dan Gaston, PhD
Clinical Laboratory Bioinformatician
Department of Pathology and Laboratory Medicine
Division of Hematopathology
Rm 511, 5788 University Ave.
Halifax, NS B3H 1V8





--
Text by Jeff, typos by iPhone


RE: service discovery in Mesos on CoreOS

2015-06-30 Thread Aaron Carey
+1 for mesos-consul

We've been using it to great effect!


From: Dave Lester [d...@davelester.org]
Sent: 30 June 2015 06:38
To: user@mesos.apache.org
Subject: Re: service discovery in Mesos on CoreOS

It would be great to have a documentation page devoted to compiling these 
different solutions to service discovery; if anyone wants create a new markdown 
file in docs/ and submit a pull request or review on Review Board, add me as a 
reviewer!

Dave

On Mon, Jun 29, 2015, at 08:19 PM, haosdent wrote:
There is also another service discovery tool: https://www.consul.io/ 
https://github.com/CiscoCloud/mesos-consul

On Tue, Jun 30, 2015 at 10:51 AM, zhou weitao 
zhouwtl...@gmail.com wrote:


2015-06-30 6:23 GMT+08:00 Andras Kerekes 
andras.kere...@ishisystems.com:

Hi,


Is there a preferred way to do service discovery in Mesos via mesos-dns running 
on CoreOS? I’m trying to implement a simple app which consists of two docker 
containers and one of them (A) depends on the other (B). What I’d like to do is 
to tell container A to use a fix dns name (containerB.marathon.mesos in case of 
mesos-dns) to find the other service. There are at least 3 different ways I 
think it can be done, but the 3 I found all have some shortcomings.


1. Use SRV records to get the port along with the IP. Con: I'd prefer not to 
build the logic of handling SRV records into the app; it can be a legacy app 
that is difficult to modify

2. Use haproxy on slaves and connect via a well-known port on localhost. Cons: 
the Marathon provided script does not run on CoreOS, also I don’t know how to 
run haproxy on CoreOS outside of a docker container. If it is running in a 
docker container, then how can it dynamically allocate ports on localhost if a 
new service is discovered in Marathon/Mesos?


Do you know this repo? https://github.com/QubitProducts/bamboo . And here our 
corp one https://github.com/Dataman-Cloud/bamboo branched from the above.



3. Use a dedicated port to bind the containers to. Con: I can have only as many 
instances of a service as I have slaves, because they bind to the same port.


What other alternatives are there?


Thanks,

Andras
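For what it's worth, option 1 needs very little logic once the record data is in hand; a minimal sketch, assuming mesos-dns's standard SRV data format of "priority weight port target" (the record name and target below are illustrative):

```python
from collections import namedtuple

SRV = namedtuple("SRV", "priority weight port target")

def parse_srv(rdata):
    """Parse the textual data of one SRV record, e.g. as returned for a
    name like _containerb._tcp.marathon.mesos (illustrative)."""
    priority, weight, port, target = rdata.split()
    # Drop the trailing dot of the fully-qualified target name.
    return SRV(int(priority), int(weight), int(port), target.rstrip("."))

rec = parse_srv("0 0 31000 containerb-abc123.marathon.slave.mesos.")
```

The lookup itself still needs a resolver that returns SRV data, which is usually the part legacy apps lack; this only shows that the record handling afterwards is trivial.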






--
Best Regards,
Haosdent Huang



RE: [DISCUSS] Renaming Mesos Slave

2015-06-04 Thread Aaron Carey
Thanks James,

Interesting background!

From: CCAAT [cc...@tampabay.rr.com]
Sent: 04 June 2015 14:05
To: user@mesos.apache.org
Cc: cc...@tampabay.rr.com
Subject: Re: [DISCUSS] Renaming Mesos Slave

On 06/04/2015 02:32 AM, Aaron Carey wrote:
 +1 to Itamar.

 I'd be interested to hear any case studies of how this has been handled
 in other OS projects with master/slave namings if anyone can give examples?

Sure, it's easy to research; just use 'master-slave' in your search
strings. Here is a good place to start:

http://en.wikipedia.org/wiki/Master/slave_(technology)


hth,
James



 
 *From:* Itamar Ostricher [ita...@yowza3d.com]
 *Sent:* 04 June 2015 05:38
 *To:* user@mesos.apache.org
 *Cc:* dev
 *Subject:* Re: [DISCUSS] Renaming Mesos Slave

 Strong -1 for changing the name (either master or slave).

  From a community stand point, if dev resources are diverted to renaming
 efforts, then the community and the user base both lose meaningful
 functionality that isn't being worked on.

  From a using organization stand point, as well as framework developer
 perspective, I follow mesos releases pretty closely, and I'm confident
 that the version that deprecates backward compatibility with the current
 names will be a version I will not be able to adopt for months, if at all...

 So please don't do that, or if you do, consider leaving in a
 configuration option to keep the current names for sane upgrades.



RE: Re:

2015-06-01 Thread Aaron Carey
Ah perfect! Thanks for the info!


From: Adam Bordelon [a...@mesosphere.io]
Sent: 01 June 2015 06:48
To: user@mesos.apache.org
Subject: Re:

FYI, Mesos will exclude 1GB from what it auto-detects, so that the mesos-slave 
process and other system processes can use some memory. See 
https://github.com/apache/mesos/blob/0.22.1/src/slave/containerizer/containerizer.cpp#L107
If you explicitly set the memory requirements as Ondrej suggests, you can 
override this. However, you run the risk of your tasks consuming all the memory 
in the system so that Mesos itself cannot run effectively.

On Thu, May 21, 2015 at 4:24 AM, Ondrej Smola 
ondrej.sm...@gmail.com wrote:
It is little more complicated and it depends on your environment - you need to 
give some RAM to OS and running processes (Mesos, Docker etc.). Quick test - VM 
with 3GB RAM and Mesos offers 1.9G - so there should is no problem related to 
your mesos setup (mesos in both cases offers around 63% of RAM).

About manual setup: you can use some automation tool (Ansible, Puppet) if you 
plan to setup large number of nodes.



2015-05-21 13:10 GMT+02:00 Aaron Carey aca...@ilm.com:
Thanks Ondrej,

Do I have to do this? I was under the impression if you didn't specify the 
resources then mesos would just offer everything available?

Thanks,
Aaron


From: Ondrej Smola [ondrej.sm...@gmail.com]
Sent: 21 May 2015 12:04
To: user@mesos.apache.org
Subject:

Hi Aaron,

You can set memory in /etc/mesos-slave/resources

example:

cpus(*):4;mem(*):16067;ports(*):[80-80,31000-32000]

with this configuration mesos offers 15.7GB RAM on one of our nodes.
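The resources string is a semicolon-separated list of name(role):value entries. A rough sketch of reading one back (illustrative only — Mesos's own parser additionally types each value as a scalar, range, or set):

```python
import re

def parse_resources(spec):
    """Split a --resources string such as
    'cpus(*):4;mem(*):16067;ports(*):[80-80,31000-32000]'
    into {name: (role, raw_value)}."""
    parsed = {}
    for item in spec.split(";"):
        # name(role):value -- the value is kept as its raw string.
        match = re.match(r"(\w+)\((.*?)\):(.*)", item)
        if match:
            name, role, value = match.groups()
            parsed[name] = (role, value)
    return parsed

resources = parse_resources("cpus(*):4;mem(*):16067;ports(*):[80-80,31000-32000]")
```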







2015-05-21 12:51 GMT+02:00 Aaron Carey aca...@ilm.com:
I've managed to increase the disksize by playing with some docker options,

Anyone have any idea about the memory?

Thanks,
Aaron


From: Aaron Carey [aca...@ilm.com]
Sent: 21 May 2015 11:19
To: user@mesos.apache.org
Subject: How slaves calculate resources

Hi,

I was just trying to figure out how Mesos slaves report the amount of resources 
available to them on the host?

We have some slaves running on AWS t2.medium machines (2cpu, 4Gb RAM) with 32GB 
disks.

The slaves are running inside docker containers.

They report 2 cpus (correct), 2.5GB RAM and 4.9GB disk.

Any ideas why this is different from what I can see on the machine? (both on 
the host and within the slave docker container)?

Thanks,
Aaron





RE: How slaves calculate resources

2015-05-22 Thread Aaron Carey
That's very useful, thank you!

Aaron


From: Ian Downes [idow...@twitter.com]
Sent: 21 May 2015 18:20
To: user@mesos.apache.org
Subject: Re: How slaves calculate resources

You can specify the resources a slave should offer using the --resources flag. 
If unspecified, the slave determines (guesses) appropriate values. For memory 
it will call os::memory() as Alexander stated; if total memory is at least 
2 GB it will leave 1 GB for the system and offer the rest, i.e., memory - 
1 GB. If memory is less than 2 GB it will offer 50%.

The relevant code from slave/containerizer/containerizer.cpp is:

  if (!strings::contains(flags.resources.get(), "mem")) {
    // No memory specified so probe OS or resort to DEFAULT_MEM.
    Bytes mem;
    Try<os::Memory> mem_ = os::memory();
    if (mem_.isError()) {
      LOG(WARNING) << "Failed to auto-detect the size of main memory: '"
                   << mem_.error()
                   << "'; defaulting to DEFAULT_MEM";
      mem = DEFAULT_MEM;
    } else {
      Bytes total = mem_.get().total;
      if (total >= Gigabytes(2)) {
        mem = total - Gigabytes(1); // Leave 1GB free.
      } else {
        mem = Bytes(total.bytes() / 2); // Use 50% of the memory.
      }
    }
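Translated out of the C++ above, the default sizing rule reduces to the following sketch (a hedged illustration, not Mesos code; values in megabytes). Note the real slave starts from os::memory(), whose reported total is a bit below the nominal RAM, which is why a 3 GB VM shows about 1.9 GB offered rather than exactly 2 GB:

```python
GB = 1024  # working in megabytes

def default_mem_offer(total_mb):
    """Memory a slave offers when --resources does not specify mem,
    mirroring the quoted containerizer logic."""
    if total_mb >= 2 * GB:
        return total_mb - GB   # leave 1 GB for the system
    return total_mb // 2       # small hosts: offer half
```

So a nominal 16 GB host would offer 15 GB, and a 1 GB host 512 MB.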

On Thu, May 21, 2015 at 7:35 AM, Alexander Gallego 
agall...@concord.io wrote:
Basically all the info you need is in os.hpp in the stout lib of mesos.

Effectively, the cpus are just a syscall:

sysconf(_SC_NPROCESSORS_ONLN);


The memory on the other hand is calculated:


# if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 3, 23)
  memory.total = Bytes(info.totalram * info.mem_unit);
  memory.free = Bytes(info.freeram * info.mem_unit);
# else
  memory.total = Bytes(info.totalram);
  memory.free = Bytes(info.freeram);
# endif





On Thu, May 21, 2015 at 6:51 AM, Aaron Carey aca...@ilm.com wrote:
I've managed to increase the disksize by playing with some docker options,

Anyone have any idea about the memory?

Thanks,
Aaron


From: Aaron Carey [aca...@ilm.com]
Sent: 21 May 2015 11:19
To: user@mesos.apache.org
Subject: How slaves calculate resources

Hi,

I was just trying to figure out how Mesos slaves report the amount of resources 
available to them on the host?

We have some slaves running on AWS t2.medium machines (2cpu, 4Gb RAM) with 32GB 
disks.

The slaves are running inside docker containers.

They report 2 cpus (correct), 2.5GB RAM and 4.9GB disk.

Any ideas why this is different from what I can see on the machine? (both on 
the host and within the slave docker container)?

Thanks,
Aaron



--





Sincerely,
Alexander Gallego
Co Founder & CTO



How slaves calculate resources

2015-05-21 Thread Aaron Carey
Hi,

I was just trying to figure out how Mesos slaves report the amount of resources 
available to them on the host?

We have some slaves running on AWS t2.medium machines (2cpu, 4Gb RAM) with 32GB 
disks.

The slaves are running inside docker containers.

They report 2 cpus (correct), 2.5GB RAM and 4.9GB disk.

Any ideas why this is different from what I can see on the machine? (both on 
the host and within the slave docker container)?

Thanks,
Aaron


RE: How slaves calculate resources

2015-05-21 Thread Aaron Carey
I've managed to increase the disksize by playing with some docker options,

Anyone have any idea about the memory?

Thanks,
Aaron


From: Aaron Carey [aca...@ilm.com]
Sent: 21 May 2015 11:19
To: user@mesos.apache.org
Subject: How slaves calculate resources

Hi,

I was just trying to figure out how Mesos slaves report the amount of resources 
available to them on the host?

We have some slaves running on AWS t2.medium machines (2cpu, 4Gb RAM) with 32GB 
disks.

The slaves are running inside docker containers.

They report 2 cpus (correct), 2.5GB RAM and 4.9GB disk.

Any ideas why this is different from what I can see on the machine? (both on 
the host and within the slave docker container)?

Thanks,
Aaron


RE: [Junk released by User action] Re: Batch Scheduler with dependency support

2015-05-14 Thread Aaron Carey
Hi Sharma,

This sounds eerily familiar! Was this an in-house system you were working on, 
or a commercial product?

Thanks,
Aaron


From: Sharma Podila [spod...@netflix.com]
Sent: 13 May 2015 23:49
To: user@mesos.apache.org
Cc: Douglas Thain; Brian Bockelman
Subject: [Junk released by User action] Re: Batch Scheduler with dependency 
support

I keep longing for folks with decades of experience in HTC/HPC to chime in 
on-list.

FWIW, I come from that background, but, am not in that space at this time. My 
prior life was in developing a (not open source) distributed job scheduler and 
management system for batch and interactive jobs that handled dependencies, 
deadlines, preemptions, advance reservation of resources, etc. with multi-level 
priority and share tree hierarchy based allocation. Typically, dependencies and 
deadlines are handled outside of schedulers and fed into schedulers as task 
submission after dependencies have been met. We found it more optimal to have 
the scheduler resolve dependencies and deadlines inherently. This way, a high 
priority job dependent on another low priority job can induce higher priority 
on that dependent job. Similarly, a job with a deadline depending on another 
job's completion can induce an earlier launch of the latter job in order to 
meet its deadline. Also, a dependent job can reserve its resources in advance, 
knowing the expected completion time of its dependent jobs. This was important 
because in that environment we always had more jobs to run than can run on 
available resources. It wasn't unusual to have 10s of 1000s of jobs waiting in 
queue to run during the day.

Not sure if this helps the original question in this thread in any way. But, I 
am glad to share my learning, if that helps.

Sharma


On Wed, May 13, 2015 at 1:12 PM, Tim St Clair 
tstcl...@redhat.com wrote:
Hi Alex,

Have you by chance integrated with any of the tradition batch DAG systems?

http://pegasus.isi.edu/ , http://ccl.cse.nd.edu/software/makeflow/

I keep longing for folks with decades of experience in HTC/HPC to chime in 
on-list.

Subtle nudge ;-)
Tim


From: Alex Gaudio adgau...@gmail.com
To: user@mesos.apache.org
Sent: Wednesday, May 13, 2015 3:04:20 PM

Subject: Re: Batch Scheduler with dependency support

Hi Tim (and everyone else!),

I am the primary author of Stolos.  We use Stolos to run all of our batch jobs 
on Mesos.  The batch jobs are scripts we can run from the command-line.  
Scripts range from bash scripts, Spark jobs and R scripts.

It's a great tool for us because, unlike Chronos, it lets us define a script as 
a stage in a dependency chain, where the script can run with different parameters 
for different dependency contexts.  (The closest usage of this would be to have 
many Chronos servers, though this does not work in all cases).

The tool is a critical component of Sailthru's data science infrastructure, but 
I believe we are the only people who use the tool right now.

If you are interested in learning more, I'm happy to invest time to talk more 
about Stolos, what it does and how we use it!

Alex

On Wed, May 13, 2015 at 2:02 PM Tim Chen 
t...@mesosphere.iomailto:t...@mesosphere.io wrote:
How are you running your batch jobs? Is the batch job script/executable an 
in-house app?

Tim

On Wed, May 13, 2015 at 9:46 AM, Andras Kerekes 
andras.kere...@ishisystems.commailto:andras.kere...@ishisystems.com wrote:
You might want to have a look at stolos too:

https://github.com/sailthru/stolos

Andras


From: Aaron Carey [mailto:aca...@ilm.com]
Sent: Wednesday, May 13, 2015 11:54 AM
To: user@mesos.apache.org
Subject: RE: Batch Scheduler with dependency support

Thanks! I hadn't come across that one before :)

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 16:39
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support
Lookup Hubspot's Singularity

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Thanks Jeff,

Any other options around as well?

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 14:12
To: user@mesos.apache.org
Subject: Batch Scheduler with dependency support
It does both just as well, along with cron-like functionality. It is harder to 
install and takes a bit more understanding however. The official tutorial is a 
process that loops 100 times and then exits.

http

RE: Batch Scheduler with dependency support

2015-05-14 Thread Aaron Carey
Thanks Andras!

This is very interesting, it comes quite close to what we're looking for,

Thanks,
Aaron


From: Andras Kerekes [andras.kere...@ishisystems.com]
Sent: 13 May 2015 17:46
To: user@mesos.apache.org
Subject: RE: Batch Scheduler with dependency support

You might want to have a look at stolos too:

https://github.com/sailthru/stolos

Andras


From: Aaron Carey [mailto:aca...@ilm.com]
Sent: Wednesday, May 13, 2015 11:54 AM
To: user@mesos.apache.org
Subject: RE: Batch Scheduler with dependency support

Thanks! I hadn't come across that one before :)

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 16:39
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support
Lookup Hubspot's Singularity

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Thanks Jeff,

Any other options around as well?

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 14:12
To: user@mesos.apache.org
Subject: Batch Scheduler with dependency support
It does both just as well, along with cron-like functionality. It is harder to 
install and takes a bit more understanding however. The official tutorial is a 
process that loops 100 times and then exits.

http://aurora.apache.org/documentation/latest/tutorial/#the-script

Aurora is pretty much a superset of most other generic frameworks sans maybe 
hubspot's singularity.

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
I was under the impression Aurora was for long-running services? Is it suitable 
for scheduling one-off batch processes too?

thanks,
Aaron

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 13:12
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support
Apache Aurora does this and you can be explicit about the ordering

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Hi All,

I was just wondering if anyone out there knew of a good mesos batch scheduler 
which supports dependencies between tasks? (ie Task B cannot run until Task A 
is complete)

Thanks,
Aaron


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone


RE: Batch Scheduler with dependency support

2015-05-14 Thread Aaron Carey
It varies: some in-house apps, some commercial software, although we're wrapping 
everything in a docker container for consistency.

There are usually multiple steps for each task, and these steps can often be 
subdivided into multiple parallel processes. Different steps require different 
executables though.

Aaron


From: Tim Chen [t...@mesosphere.io]
Sent: 13 May 2015 19:01
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support

How are you running your batch jobs? Is the batch job script/executable an 
in-house app?

Tim

On Wed, May 13, 2015 at 9:46 AM, Andras Kerekes andras.kere...@ishisystems.com wrote:
You might want to have a look at stolos too:

https://github.com/sailthru/stolos

Andras


From: Aaron Carey [mailto:aca...@ilm.com]
Sent: Wednesday, May 13, 2015 11:54 AM
To: user@mesos.apache.org
Subject: RE: Batch Scheduler with dependency support

Thanks! I hadn't come across that one before :)

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 16:39
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support
Lookup Hubspot's Singularity

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Thanks Jeff,

Any other options around as well?

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 14:12
To: user@mesos.apache.org
Subject: Batch Scheduler with dependency support
It does both just as well, along with cron-like functionality. It is harder to 
install and takes a bit more understanding however. The official tutorial is a 
process that loops 100 times and then exits.

http://aurora.apache.org/documentation/latest/tutorial/#the-script

Aurora is pretty much a superset of most other generic frameworks sans maybe 
hubspot's singularity.

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
I was under the impression Aurora was for long-running services? Is it suitable 
for scheduling one-off batch processes too?

thanks,
Aaron

From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 13:12
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support
Apache Aurora does this and you can be explicit about the ordering

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Hi All,

I was just wondering if anyone out there knew of a good mesos batch scheduler 
which supports dependencies between tasks? (ie Task B cannot run until Task A 
is complete)

Thanks,
Aaron


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone



Batch Scheduler with dependency support

2015-05-13 Thread Aaron Carey
Hi All,

I was just wondering if anyone out there knew of a good mesos batch scheduler 
which supports dependencies between tasks? (ie Task B cannot run until Task A 
is complete)

Thanks,
Aaron


RE: Batch Scheduler with dependency support

2015-05-13 Thread Aaron Carey
I was under the impression Aurora was for long-running services? Is it suitable 
for scheduling one-off batch processes too?

thanks,
Aaron


From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder 
[jeffschroe...@computer.org]
Sent: 13 May 2015 13:12
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support

Apache Aurora does this and you can be explicit about the ordering

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Hi All,

I was just wondering if anyone out there knew of a good mesos batch scheduler 
which supports dependencies between tasks? (ie Task B cannot run until Task A 
is complete)

Thanks,
Aaron


--
Text by Jeff, typos by iPhone


RE: Batch Scheduler with dependency support

2015-05-13 Thread Aaron Carey
Thanks! I hadn't come across that one before :)


From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder 
[jeffschroe...@computer.org]
Sent: 13 May 2015 16:39
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support

Lookup Hubspot's Singularity

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Thanks Jeff,

Any other options around as well?


From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder [jeffschroe...@computer.org]
Sent: 13 May 2015 14:12
To: user@mesos.apache.org
Subject: Batch Scheduler with dependency support

It does both just as well, along with cron-like functionality. It is harder to 
install and takes a bit more understanding however. The official tutorial is a 
process that loops 100 times and then exits.

http://aurora.apache.org/documentation/latest/tutorial/#the-script

Aurora is pretty much a superset of most other generic frameworks sans maybe 
hubspot's singularity.

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
I was under the impression Aurora was for long-running services? Is it suitable 
for scheduling one-off batch processes too?

thanks,
Aaron


From: jeffschr...@gmail.com [jeffschr...@gmail.com] on behalf of Jeff Schroeder 
[jeffschroe...@computer.org]
Sent: 13 May 2015 13:12
To: user@mesos.apache.org
Subject: Re: Batch Scheduler with dependency support

Apache Aurora does this and you can be explicit about the ordering

On Wednesday, May 13, 2015, Aaron Carey aca...@ilm.com wrote:
Hi All,

I was just wondering if anyone out there knew of a good mesos batch scheduler 
which supports dependencies between tasks? (ie Task B cannot run until Task A 
is complete)

Thanks,
Aaron


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone


--
Text by Jeff, typos by iPhone


RE: Using mesos-dns in an enterprise

2015-04-13 Thread Aaron Carey
Thanks for this... very useful!


From: Christos Kozyrakis [kozyr...@gmail.com]
Sent: 07 April 2015 23:25
To: user@mesos.apache.org
Cc: John Omernik
Subject: Re: Using mesos-dns in an enterprise

This is a great thread, thanks for starting it John.
I will transcode your message into a tutorial on the Mesos-DNS documentation. I 
will ping you to take a look and edit as needed (that goes to all of you with 
some experience on the topic).

On Thu, Apr 2, 2015 at 5:58 PM, John Omernik 
j...@omernik.com wrote:
Mesos-dns seems pretty light weight, why not constrain it to a group of 3-5 
hosts, and then list all of them as your forwarding resolvers. While not truly 
run anywhere, I would imagine with some good node/rack placement you would be 
sufficiently HA

On Thursday, April 2, 2015, Tom Arnfeld t...@duedil.com wrote:
We're using a BGP based solution currently to solve the problem of highly 
available DNS resolvers.

That might be a route worth taking, and one that could still work via marathon 
on top of Mesos.

--

Tom Arnfeld
Developer // DueDil

(+44) 7525940046
25 Christopher Street, London, EC2A 2BS



On Thu, Apr 2, 2015 at 10:07 PM, John Omernik j...@omernik.com wrote:

True :)


On Thu, Apr 2, 2015 at 3:37 PM, Tom Arnfeld t...@duedil.com wrote:
Last time I checked haproxy didn't support UDP which would be key for mesos-dns.

--

Tom Arnfeld
Developer // DueDil

(+44) 7525940046
25 Christopher Street, London, EC2A 2BS



On Thu, Apr 2, 2015 at 3:53 PM, John Omernik j...@omernik.com wrote:

That was my first response as well... I work at a bank, and the thought of 
changing dns servers on the clients everywhere made me roll my eyes :)

John


On Thu, Apr 2, 2015 at 9:39 AM, Tom Arnfeld t...@duedil.com wrote:
This is great, thanks for sharing!

It's nice to see other members of the community sharing more realistic 
implementations of DNS rather than just "update your resolv.conf and it works" 
:-)

--

Tom Arnfeld
Developer // DueDil

(+44) 7525940046
25 Christopher Street, London, EC2A 2BS



On Thu, Apr 2, 2015 at 3:30 PM, John Omernik j...@omernik.com wrote:

Based on my earlier emails about the state of service discovery.  I did some 
research and a little writeup on how to use mesos-dns as a forward lookup zone 
in a enterprise bind installation. I feel this is more secure, and more 
comfortable for an enterprise DNS team as opposed to changing the first 
resolver on every client that may interact with mesos to be the mesos-dns 
server.  Please feel free to modify/correct and include this in the mesos-dns 
documentation if you feel it's valuable.


Goals/Thought Process
- Run mesos-dns on a non-standard port. (such as 8053).  This allows you to run 
it as a non-root user.
- While most DNS clients may not understand this (a different port), in an 
enterprise, most DNS servers will respect a forward lookup zone with a server 
using a different port.
- Setup below for BIND9 allows you to keep all your mesos servers AND clients 
in an enterprise pointing their requests at your enterprise DNS server, rather 
than mesos-dns.
  - This is easier from an enterprise configuration standpoint. Make one change 
on your dns servers, rather than adding a resolver on all the clients.
  - This is more secure in that you can run mesos-dns as non-root (53 is a 
privileged port, 8053 is not) no sudo required
  - For more security, you can limit connections to the mesos-dns server to 
only your enterprise dns servers. This could help mitigate any unknown 
vulnerabilities in mesos-dns.
  - This allows you to HA mesos-dns in that you can specify multiple resolvers 
for your bind configuration.




Bind9 Config
This was put into my named.conf.local. It sets up the .mesos zone and forwards 
to mesos-dns. All my mesos servers already pointed at this server, therefore no 
client changes were required.


#192.168.0.100 is my host running mesos DNS
zone "mesos" {
    type forward;
    forward only;
    forwarders { 192.168.0.100 port 8053; };
};




config.json mesos-dns config file.
I DID specify my internal DNS server in the resolvers (192.168.0.10); however, I 
am not sure if I need to do this, since only requests for .mesos will actually 
be sent to mesos-dns.

{
  "masters": ["192.168.0.98:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 8053,
  "resolvers": ["192.168.0.10"],
  "timeout": 5,
  "listener": "0.0.0.0",
  "email": "root.mesos-dns.mesos"
}


marathon start json
Note the lack of sudo here. I also constrained it to one host for now, but that 
could change if needed.

{
  "cmd": "/mapr/brewpot/mesos/mesos-dns/mesos-dns -config=/mapr/brewpot/mesos/mesos-dns/config.json",
  "cpus": 1.0,
  "mem": 1024,
  "id": "mesos-dns",
  "instances": 1,
  "constraints": [["hostname", "CLUSTER", "hadoopmapr1.brewingintel.com"]]
}







--
Sent from my iThing



--

RE: Unable to install subversion-devel 1.8+ on CentOS 7

2015-03-23 Thread Aaron Carey
Not sure if this helps, but we've been using docker to run Mesos on Centos 7 
hosts.


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:06
To: user@mesos.apache.org
Subject: Unable to install subversion-devel 1.8+ on CentOS 7

Mesos 0.21.0+ requires subversion-devel 1.8+, which can be installed by adding 
the Wandisco yum repo. However, it appears that subversion-devel 1.8+ requires 
libsasl2.so.2, which is not available on CentOS7.

I've seen one person try to create a symlink to libsasl2.so.3 and it worked 
[1], while another person found it did not work [2].

I created a CentOS 7 droplet on DigitalOcean, added the Wandisco repo and tried 
to install subversion-devel and it failed b/c of the libsasl2.so.2 missing. I 
tried creating a symlink (ln -s /usr/lib64/libsasl2.so.3 
/usr/lib64/libsasl2.so.2), restarting the server and installing 
subversion-devel still failed b/c of the libsasl issue:

Error: Package: subversion-1.8.11-1.x86_64 (WandiscoSVN)
   Requires: libsasl2.so.2()(64bit)

Anyone had any success on CentOS 7 with Mesos 0.21+?

[1] - did work: http://unix.stackexchange.com/a/178408

[2] - did not 
work:http://www.wandisco.com/svnforum/forum/smartsvn-community/smartsvn-help-and-support/69834-installing-subversion-command-line-client-on-centos-7


RE: Unable to install subversion-devel 1.8+ on CentOS 7

2015-03-23 Thread Aaron Carey
ah interesting.. what causes this difference?

I think this probably makes sense for our setup currently..


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:20
To: user@mesos.apache.org
Subject: Re: Unable to install subversion-devel 1.8+ on CentOS 7

I had considered running Mesos in docker containers; however, with Mesos 0.21, if 
the slave is running in a container and it has tasks running in containers, then 
if the slave container were to exit/die, those task containers would also exit. 
If running mesos-slave on the host, any tasks that it had running will remain 
running even if the process dies. That's why I hadn't gone that route. Have you 
considered that?

On Mon, Mar 23, 2015 at 8:15 AM, craig w codecr...@gmail.com wrote:
What OS is your docker image based on?

On Mon, Mar 23, 2015 at 8:11 AM, Aaron Carey aca...@ilm.com wrote:
Not sure if this helps, but we've been using docker to run Mesos on Centos 7 
hosts.


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:06
To: user@mesos.apache.org
Subject: Unable to install subversion-devel 1.8+ on CentOS 7

Mesos 0.21.0+ requires subversion-devel 1.8+, which can be installed by adding 
the Wandisco yum repo. However, it appears that subversion-devel 1.8+ requires 
libsasl2.so.2, which is not available on CentOS7.

I've seen one person try to create a symlink to libsasl2.so.3 and it worked 
[1], while another person found it did not work [2].

I created a CentOS 7 droplet on DigitalOcean, added the Wandisco repo and tried 
to install subversion-devel and it failed b/c of the libsasl2.so.2 missing. I 
tried creating a symlink (ln -s /usr/lib64/libsasl2.so.3 
/usr/lib64/libsasl2.so.2), restarting the server and installing 
subversion-devel still failed b/c of the libsasl issue:

Error: Package: subversion-1.8.11-1.x86_64 (WandiscoSVN)
   Requires: libsasl2.so.2()(64bit)

Anyone had any success on CentOS 7 with Mesos 0.21+?

[1] - did work: http://unix.stackexchange.com/a/178408

[2] - did not 
work:http://www.wandisco.com/svnforum/forum/smartsvn-community/smartsvn-help-and-support/69834-installing-subversion-command-line-client-on-centos-7



--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links



--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


RE: Unable to install subversion-devel 1.8+ on CentOS 7

2015-03-23 Thread Aaron Carey
Thanks, that's very useful to know!


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:41
To: user@mesos.apache.org
Subject: Re: Unable to install subversion-devel 1.8+ on CentOS 7

https://issues.apache.org/jira/browse/MESOS-2115

On Mon, Mar 23, 2015 at 8:30 AM, Aaron Carey aca...@ilm.com wrote:
ah interesting.. what causes this difference?

I think this probably makes sense for our setup currently..


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:20
To: user@mesos.apache.org
Subject: Re: Unable to install subversion-devel 1.8+ on CentOS 7

I had considered running Mesos in docker containers; however, with Mesos 0.21, if 
the slave is running in a container and it has tasks running in containers, then 
if the slave container were to exit/die, those task containers would also exit. 
If running mesos-slave on the host, any tasks that it had running will remain 
running even if the process dies. That's why I hadn't gone that route. Have you 
considered that?

On Mon, Mar 23, 2015 at 8:15 AM, craig w codecr...@gmail.com wrote:
What OS is your docker image based on?

On Mon, Mar 23, 2015 at 8:11 AM, Aaron Carey aca...@ilm.com wrote:
Not sure if this helps, but we've been using docker to run Mesos on Centos 7 
hosts.


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 12:06
To: user@mesos.apache.org
Subject: Unable to install subversion-devel 1.8+ on CentOS 7

Mesos 0.21.0+ requires subversion-devel 1.8+, which can be installed by adding 
the Wandisco yum repo. However, it appears that subversion-devel 1.8+ requires 
libsasl2.so.2, which is not available on CentOS7.

I've seen one person try to create a symlink to libsasl2.so.3 and it worked 
[1], while another person found it did not work [2].

I created a CentOS 7 droplet on DigitalOcean, added the Wandisco repo and tried 
to install subversion-devel and it failed b/c of the libsasl2.so.2 missing. I 
tried creating a symlink (ln -s /usr/lib64/libsasl2.so.3 
/usr/lib64/libsasl2.so.2), restarting the server and installing 
subversion-devel still failed b/c of the libsasl issue:

Error: Package: subversion-1.8.11-1.x86_64 (WandiscoSVN)
   Requires: libsasl2.so.2()(64bit)

Anyone had any success on CentOS 7 with Mesos 0.21+?

[1] - did work: http://unix.stackexchange.com/a/178408

[2] - did not 
work:http://www.wandisco.com/svnforum/forum/smartsvn-community/smartsvn-help-and-support/69834-installing-subversion-command-line-client-on-centos-7



--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links



--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links



--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


RE: Zookeeper integration for Mesos-DNS

2015-03-23 Thread Aaron Carey
As I understood it, it provides a service for containers within the cluster to 
automatically find each other as it handles their dns calls?

However clients outside the cluster will not use the mesos-dns service by 
default, so won't have knowledge of anything running inside the cluster?

Is there an easy way to set this up to (for example) add records to AWS Route 
53 when services get started in the cluster, so other clients can see them?

Thanks!
Aaron


From: Ken Sipe [kens...@gmail.com]
Sent: 23 March 2015 13:31
To: user@mesos.apache.org
Subject: Re: Zookeeper integration for Mesos-DNS

Aaron,

It depends on what you mean; however, Mesos-DNS works outside the cluster IMO. 
It is a bridge for things in the cluster (services launched by Mesos)... but at 
that point it is DNS. Any client, in or out of the cluster, that can query DNS 
can leverage the service.

Sent from my iPhone

On Mar 23, 2015, at 4:25 AM, Aaron Carey aca...@ilm.com wrote:

Hey,

I don't suppose there is anything like Mesos-DNS but for services/users outside 
the mesos cluster? So having a service which updates a DNS provider with task 
port/ips running inside the cluster so that external users are able to find 
those services? Am I correct in thinking Mesos-DNS only works inside the 
cluster?

Currently we're using consul for this, but I'd be interested if there was some 
sort of magical plug and play solution?

Thanks,
Aaron


From: Christos Kozyrakis [kozyr...@gmail.commailto:kozyr...@gmail.com]
Sent: 21 March 2015 00:18
To: user@mesos.apache.org
Subject: Zookeeper integration for Mesos-DNS

Hi everybody,

we have updated Mesos-DNS to integrate directly with Zookeeper. Instead of 
providing Mesos-DNS with a list of masters, you point it to the Zookeeper 
instances. Mesos-DNS will watch Zookeeper to detect the current leading master. 
So, while the list of Zookeeper instances is configured in a static manner, 
Mesos masters can be added or removed freely without restarting Mesos-DNS.

The integration with Zookeeper forced us to switch the verbosity flags from -v 
and -vv to -v=0 (default), -v=1 (verbose), and -v=2 (very verbose).

To reduce complications because of dependencies to other packages, we have also 
started using godep.

Please take a look at the branch https://github.com/mesosphere/mesos-dns/tree/zk
and provide us with any feedback on the code or the documentation.

Thanks

--
Christos
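
A minimal Mesos-DNS config for the Zookeeper mode described above might look 
like this (field names taken from the mesos-dns README of the time; treat the 
exact names and values as assumptions):

```json
{
  "zk": "zk://10.0.0.1:2181,10.0.0.2:2181/mesos",
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8"],
  "refreshSeconds": 60,
  "ttl": 60
}
```

The `zk` setting replaces the static `masters` list: the Zookeeper ensemble is 
still fixed in the config, but masters can come and go freely.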


RE: Zookeeper integration for Mesos-DNS

2015-03-23 Thread Aaron Carey
lovely, thanks!


From: craig w [codecr...@gmail.com]
Sent: 23 March 2015 15:35
To: user@mesos.apache.org
Subject: Re: Zookeeper integration for Mesos-DNS

Keep in mind DNS will give you the IP address of the host, so 
rabbitmq.marathon.mesos will resolve to some IP address. To get port 
information you have to query mesos-dns for its SRV records.
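
The naming is mechanical, so the lookup names can be derived from the task and 
framework; a small sketch of the convention (name layout per the mesos-dns 
docs; the defaults here are assumptions):

```python
def mesos_dns_names(task, framework="marathon", domain="mesos", proto="tcp"):
    """Return the (A-record name, SRV-record name) Mesos-DNS generates.

    The A record resolves to the host IP only; the SRV record also carries
    the port, which is why port discovery needs an SRV query, e.g.:
        dig _rabbitmq._tcp.marathon.mesos SRV
    """
    a_name = "{0}.{1}.{2}".format(task, framework, domain)
    srv_name = "_{0}._{1}.{2}.{3}".format(task, proto, framework, domain)
    return a_name, srv_name

a_name, srv_name = mesos_dns_names("rabbitmq")
# a_name   -> "rabbitmq.marathon.mesos"        (IP only)
# srv_name -> "_rabbitmq._tcp.marathon.mesos"  (IP + port via SRV)
```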

On Mon, Mar 23, 2015 at 11:29 AM, Ken Sipe kens...@gmail.com wrote:
roger that

On Mar 23, 2015, at 9:22 AM, Aaron Carey aca...@ilm.com wrote:

Thanks Ken,

So basically we just need to add mesos-dns to our /etc/resolv.conf on every 
machine and hey presto auto-service discovery (using DNS)? (Here I mean service 
discovery to be: hey where is rabbitmq? DNS says: 172.20.121.29:8393 or 
whatever)

Aaron


From: Ken Sipe [kens...@gmail.com]
Sent: 23 March 2015 14:29
To: user@mesos.apache.org
Subject: Re: Zookeeper integration for Mesos-DNS

Aaron,

Mesos-DNS is a DNS name server plus a monitor of the mesos-masters. If a 
service is launched by Mesos, then mesos-dns conjures a service name (app_id + 
framework_id + .mesos) and associates it with the IP and PORT of the service. 
Since Mesos-DNS is a name server, it needs to be in your list of name servers 
for service discovery. From a service discovery standpoint there is no need to 
be in the cluster and no need to have a dependency on Mesos.

Mesos-DNS is not a proxy. It doesn’t provide any special services to clients 
or services inside the cluster. More detail below.

On Mar 23, 2015, at 7:52 AM, Aaron Carey aca...@ilm.com wrote:

As I understood it, it provides a service for containers within the cluster to 
automatically find each other, since it handles their DNS calls?

The way this is stated, this doesn’t seem true. Mesos-DNS is a DNS name 
server. From a service discovery standpoint, it doesn’t do anything different 
from a standard DNS name server.


However clients outside the cluster will not use the mesos-dns service by 
default, so won't have knowledge of anything running inside the cluster?

This is all dependent on how /etc/resolv.conf is setup.  If mesos-dns is in the 
list… then this is not true.
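
Concretely, on each client host that should resolve cluster names, the 
Mesos-DNS server goes first in /etc/resolv.conf (the addresses below are 
placeholders):

```
# /etc/resolv.conf
# Mesos-DNS first, then an upstream fallback for everything else
nameserver 10.0.0.5
nameserver 8.8.8.8
```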


Is there an easy way to set this up to (for example) add records to AWS Route 
53 when services get started in the cluster, so other clients can see them?

This is outside of Mesos-DNS

Good Luck!!

Thanks!
Aaron






--

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


RE: Zookeeper integration for Mesos-DNS

2015-03-23 Thread Aaron Carey
Hey,

I don't suppose there is anything like Mesos-DNS but for services/users outside 
the mesos cluster? So having a service which updates a DNS provider with task 
port/ips running inside the cluster so that external users are able to find 
those services? Am I correct in thinking Mesos-DNS only works inside the 
cluster?

Currently we're using consul for this, but I'd be interested if there was some 
sort of magical plug and play solution?

Thanks,
Aaron




Deploying containers to every mesos slave node

2015-03-12 Thread Aaron Carey
Hi All,

In setting up our cluster, we require things like consul to be running on all 
of our nodes. I was just wondering if there was any sort of best practice (or a 
scheduler perhaps) that people could share for this sort of thing?

Currently the approach is to use salt to provision each node and add 
consul/mesos slave process and so on to it, but it'd be nice to remove the 
dependency on salt.

Thanks,
Aaron
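
For the run-on-every-node requirement, one common approach (instead of baking 
the agents in with Salt) was a Marathon app pinned with a UNIQUE hostname 
constraint, with instances set to the node count; a sketch with placeholder 
values:

```json
{
  "id": "/consul-agent",
  "cmd": "consul agent -config-dir=/etc/consul.d",
  "instances": 10,
  "cpus": 0.1,
  "mem": 128,
  "constraints": [["hostname", "UNIQUE"]]
}
```

Note the limitation: Marathon has no DaemonSet equivalent, so "instances" must 
be bumped by hand (or by a small script) as nodes join the cluster.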