How production un-ready are Mesos Cassandra, Spark and Kafka Frameworks?

2015-10-12 Thread Chris Elsmore
Hi all,

I've just got back from a brilliant MesosCon Europe in Dublin. I learnt a huge 
amount, and a big thank-you to all involved for putting on a great conference!


I am looking to deploy a small (maybe 5 nodes max) Cassandra & Spark cluster to do 
some data analysis at my current employer, and am a little unsure of the 
current status of the frameworks this would need to run on Mesos. Both the 
Mesosphere docs (which I'm guessing cover the frameworks of the same name hosted 
on GitHub) and the GitHub READMEs mention that these are not production ready, 
with a rough timeline of Q1 2016.

I'm just wondering how production un-ready these are. I am looking at using 
Mesos to deploy stateless services in the next 6 months or so, so I like the 
idea of adding to that system, and the way these frameworks handle the 
configuration that binds nodes together is appealing. However, it feels like 
for a smallish production cluster it might be better to deploy Cassandra and 
Spark standalone and keep an eye on the frameworks' progress, since the 
configuration wins are not that large for a small cluster.


Any experience and advice on the above would be gratefully received!


Chris





Re:

2015-10-12 Thread Brenden Matthews
I can assure you it's much closer to being "production ready" than running
C* on Marathon.



Re:

2015-10-12 Thread Rafael Capucho
Hello @Brenden and @Rad,

@Brenden, we know about cassandra-mesos, but it says that it isn't
production ready:

"*DISCLAIMER* *This is a very early version of Cassandra-Mesos framework.
This document, code behavior, and anything else may change without notice
and/or break older installations."*

Since we will be using it in production, we think it's better to hand-craft
the configuration (so we can configure it exactly as we need) instead of
using something that could change soon and over which we don't have much
control.

@Rad, thank you, that will help a lot.







--
Rafael Capucho

Bachelor of Computer Science
Federal University of São Paulo
Institute of Science and Technology - ICT

PGP-Public Key: 2048R/7389A96F pgp.mit.edu
FP: EDB5 CDEE 8442 99CC C92D 9173 6B32 A5C9 7389 A96F


Re:

2015-10-12 Thread Brenden Matthews
It's worth noting that there's a purpose-built framework for Cassandra:
https://github.com/mesosphere/cassandra-mesos

You probably want to use this instead of trying to run C* on Marathon.

On Sun, Oct 11, 2015 at 6:05 PM, Rad Gruchalski 
wrote:

> Rafael,
>
> According to the cassandra documentation, you should not be affected at
> all:
>
> http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_tune_jvm_c.html
>
> However, your performance with these settings will be rather poor.
>
> Kind regards,
> Radek Gruchalski
> ra...@gruchalski.com 
> de.linkedin.com/in/radgruchalski/
>
>
> *Confidentiality:*This communication is intended for the above-named
> person and may be confidential and/or legally privileged.
> If it has come to you in error you must take no action based on it, nor
> must you copy or show it to anyone; please delete/destroy and inform the
> sender immediately.
>
> On Monday, 12 October 2015 at 02:42, Rafael Capucho wrote:
>
> Hello!
>
> I'm using the following Marathon script [1] to launch Cassandra non-seed
> nodes, and it is working properly.
>
> [1] - http://hastebin.com/visujikela.lua
>
> As you can see in the script, I'm limiting CPU and memory.
>
> But some nodes of my cluster are not that big, partly because the cluster
> isn't big yet. As we know, Cassandra generally uses a lot of memory for
> caches etc.
>
> 1) I would like to know whether Mesos will kill (and keep killing) the
> Cassandra process if it reaches the memory limit. If yes, how can I stop
> Mesos from killing it?
>
> 2) If I have one server with 4 GB of memory where I deployed a Mesos slave,
> and I create a container (using Marathon) with mem=1024 (for example), will
> the processes within that container, when they ask about the available
> memory, see 4 GB or 1 GB?
>
> Thank you!
>
>
> --
> Rafael Capucho
>
> Bachelor of Computer Science
> Federal University of São Paulo
> Institute of Science and Technology - ICT
>
> PGP-Public Key: 2048R/7389A96F pgp.mit.edu
> FP: EDB5 CDEE 8442 99CC C92D 9173 6B32 A5C9 7389 A96F
>
>
>
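On Rafael's two questions: with cgroups memory isolation, a task that grows past its `mem` allocation is OOM-killed (and Marathon will keep restarting it), and processes inside the container still see the host's total RAM through /proc/meminfo; only the cgroup limit is enforced. The usual fix for Cassandra is therefore to pin the JVM heap below the Marathon allocation rather than letting cassandra-env.sh auto-size it from host memory. A sketch using the stock cassandra-env.sh variables; the sizes here are illustrative, not tuned values:

```shell
# cassandra-env.sh: pin the heap explicitly so it stays under the
# Marathon mem=1024 allocation. Left unset, cassandra-env.sh sizes the
# heap from the host's RAM (e.g. 4 GB), which makes a cgroup OOM kill
# almost inevitable. Leave headroom for off-heap and native memory.
MAX_HEAP_SIZE="768M"
HEAP_NEWSIZE="200M"
```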


error: 'sasl_errdetail' is deprecated: first deprecated in OS X 10.11

2015-10-12 Thread yuankui
hello, buddies

I'm compiling Mesos on Mac OS X 10.11 (El Capitan) and have come across errors 
like the following.
version: mesos-0.24.0 & mesos-0.25.0-rc3


/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^
../../src/authentication/cram_md5/authenticator.cpp:334:20: error: 
'sasl_errdetail' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 string error(sasl_errdetail(connection));
  ^
/usr/include/sasl/sasl.h:770:25: note: 'sasl_errdetail' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errdetail(sasl_conn_t *conn) 
__OSX_AVAILABLE_BUT_DEPRECATED(__MAC_10_0,__MAC_10_11,__IPHONE_NA,__IPHONE_NA);
   ^
../../src/authentication/cram_md5/authenticator.cpp:514:18: error: 
'sasl_server_init' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
   int result = sasl_server_init(NULL, "mesos");
^
/usr/include/sasl/sasl.h:1016:17: note: 'sasl_server_init' has been explicitly 
marked deprecated here
LIBSASL_API int sasl_server_init(const sasl_callback_t *callbacks,
   ^
../../src/authentication/cram_md5/authenticator.cpp:519:11: error: 
'sasl_errstring' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 sasl_errstring(result, NULL, NULL));
 ^
/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^
../../src/authentication/cram_md5/authenticator.cpp:521:16: error: 
'sasl_auxprop_add_plugin' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 result = sasl_auxprop_add_plugin(
  ^
/usr/include/sasl/saslplug.h:1013:17: note: 'sasl_auxprop_add_plugin' has been 
explicitly marked deprecated here
LIBSASL_API int sasl_auxprop_add_plugin(const char *plugname,
   ^
../../src/authentication/cram_md5/authenticator.cpp:528:13: error: 
'sasl_errstring' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
   sasl_errstring(result, NULL, NULL));
   ^
/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^

As I'm not familiar with C++, I don't know how to solve this.

I believe I'm not the first one who has come across this problem, so I'm here 
to ask for help!
Thanks.




Re: [VOTE] Release Apache Mesos 0.25.0 (rc3)

2015-10-12 Thread Brenden Matthews
+1 (binding)

Tested on CI.

On Sun, Oct 11, 2015 at 4:12 AM, Michael Park  wrote:

> +1 (binding)
>
> Ran *make distcheck* successfully on Ubuntu 14.04 with gcc + clang, CentOS
> 7.1 with gcc
> Ran *make check* with one non-blocker failure (MESOS-3604) on OS X El
> Capitan with clang
>
> On Sat, Oct 10, 2015 at 5:44 PM Kapil Arya  wrote:
>
> > +1 (non-binding)
> >
> > On Sat, Oct 10, 2015 at 9:58 AM, Joris Van Remoortere <jo...@mesosphere.io>
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On Fri, Oct 9, 2015 at 5:36 PM, Niklas Nielsen 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > Following up with an RC with the build fix suggested by Kapil:
> > > >
> > > > Please vote on releasing the following candidate as Apache Mesos 0.25.0.
> > > >
> > > > 0.25.0 includes the following:
> > > >
> > > >  * [MESOS-1474] - Experimental support for maintenance primitives.
> > > >  * [MESOS-2600] - Added master endpoints /reserve and /unreserve for dynamic reservations.
> > > >  * [MESOS-2044] - Extended Module APIs to enable IP per container assignment, isolation and resolution.
> > > >
> > > > ** Bug fixes
> > > >
> > > >   * [MESOS-2635] - Web UI Display Bug when starting lots of tasks with small cpu value.
> > > >   * [MESOS-2986] - Docker version output is not compatible with Mesos.
> > > >   * [MESOS-3046] - Stout's UUID re-seeds a new random generator during each call to UUID::random.
> > > >   * [MESOS-3051] - performance issues with port ranges comparison.
> > > >   * [MESOS-3052] - Allocator performance issue when using a large number of filters.
> > > >   * [MESOS-3136] - COMMAND health checks with Marathon 0.10.0 are broken.
> > > >   * [MESOS-3169] - FrameworkInfo should only be updated if the re-registration is valid.
> > > >   * [MESOS-3185] - Refactor Subprocess logic in linux/perf.cpp to use common subroutine.
> > > >   * [MESOS-3239] - Refactor master HTTP endpoints help messages such that they cannot be out of sync.
> > > >   * [MESOS-3245] - The comments of DRFSorter::dirty is not correct.
> > > >   * [MESOS-3254] - Cgroup CHECK fails test harness.
> > > >   * [MESOS-3258] - Remove Frameworkinfo capabilities on re-registration.
> > > >   * [MESOS-3261] - Move QoS plug-ins to a specified folder like resource_estimator.
> > > >   * [MESOS-3269] - The comments of Master::updateSlave() is not correct.
> > > >   * [MESOS-3282] - Web UI no longer shows Tasks information.
> > > >   * [MESOS-3344] - Add more comments for strings::internal::fmt.
> > > >   * [MESOS-3351] - duplicated slave id in master after master failover.
> > > >   * [MESOS-3387] - Refactor MesosContainerizer to accept namespace dynamically.
> > > >   * [MESOS-3408] - Labels field of FrameworkInfo should be added into v1 mesos.proto.
> > > >   * [MESOS-3411] - ReservationEndpointsTest.AvailableResources appears to be faulty.
> > > >   * [MESOS-3423] - Perf event isolator stops performing sampling if a single timeout occurs.
> > > >   * [MESOS-3426] - process::collect and process::await do not perform discard propagation.
> > > >   * [MESOS-3430] - LinuxFilesystemIsolatorTest.ROOT_PersistentVolumeWithoutRootFilesystem fails on CentOS 7.1.
> > > >   * [MESOS-3450] - Update Mesos C++ Style Guide for namespace usage.
> > > >   * [MESOS-3451] - Failing tests after changes to Isolator/MesosContainerizer API.
> > > >   * [MESOS-3458] - Segfault when accepting or declining inverse offers.
> > > >   * [MESOS-3474] - ExamplesTest.{TestFramework, JavaFramework, PythonFramework} failed on CentOS 6.
> > > >   * [MESOS-3489] - Add support for exposing Accept/Decline responses for inverse offers.
> > > >   * [MESOS-3490] - Mesos UI fails to represent JSON entities.
> > > >   * [MESOS-3512] - Don't retry close() on EINTR.
> > > >   * [MESOS-3513] - Cgroups Test Filters aborts tests on Centos 6.6.
> > > >   * [MESOS-3519] - Fix file descriptor leakage / double close in the code base.
> > > >   * [MESOS-3538] - CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy test is flaky.
> > > >   * [MESOS-3575] - V1 API java/python protos are not generated.
> > > >
> > > > ** Improvements
> > > >
> > > >   * [MESOS-2719] - Deprecating '.json' extension in master endpoints

Re: error: 'sasl_errdetail' is deprecated: first deprecated in OS X 10.11

2015-10-12 Thread Marco Massenzio
I'm almost sure that you're running into
https://issues.apache.org/jira/browse/MESOS-3030
(there is a patch out to fix this: https://reviews.apache.org/r/39230/)

--
*Marco Massenzio*
Distributed Systems Engineer
http://codetrips.com

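Until that patch lands, a common workaround for building on El Capitan (assuming the autotools build used above) is to stop Clang from promoting these deprecation warnings to errors:

```shell
# Demote -Wdeprecated-declarations from an error back to a warning.
# The deprecated SASL calls still work on OS X 10.11; this only
# keeps -Werror from failing the build on them.
./configure CXXFLAGS=-Wno-deprecated-declarations
make
```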


Re: How production un-ready are Mesos Cassandra, Spark and Kafka Frameworks?

2015-10-12 Thread Dick Davies
Hi Chris



Spark is a Mesos native; I'd have no hesitation running it on Mesos.

Cassandra, not so much. That's not to disparage the work people are putting
in there; I think it's really interesting. But personally, with complex
beasts like Cassandra, I want to run as 'stock' as possible, as that makes
it easier to learn from other people's experiences.
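For what it's worth, pointing Spark at Mesos is mostly a matter of the master URL; a minimal sketch, in which the ZooKeeper addresses, the tarball URL, and my_job.py are all hypothetical placeholders:

```shell
# Run a Spark job with Mesos as the cluster manager. spark.executor.uri
# tells each Mesos agent where to fetch the Spark distribution from.
spark-submit \
  --master mesos://zk://zk1:2181,zk2:2181/mesos \
  --conf spark.executor.uri=http://repo.example.com/spark-1.5.1-bin-hadoop2.6.tgz \
  my_job.py
```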


[RESULT][VOTE] Release Apache Mesos 0.25.0 (rc3)

2015-10-12 Thread Niklas Nielsen
Hi all,


The vote for Mesos 0.25.0 (rc3) has passed with the

following votes.


+1 (Binding)

--

Joris Van Remoortere

Michael Park

Brenden Matthews


+1 (Non-binding)

--

Kapil Arya


There were no 0 or -1 votes.


Please find the release at:

https://dist.apache.org/repos/dist/release/mesos/0.25.0


It is recommended to use a mirror to download the release:

http://www.apache.org/dyn/closer.cgi


The CHANGELOG for the release is available at:

https://git-wip-us.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=0.25.0


The mesos-0.25.0.jar has been released to:

https://repository.apache.org


The website (http://mesos.apache.org) will be updated shortly to reflect
this release.


Thanks,

Mpark, Joris and Niklas


Re: Can health-checks be run by Mesos for docker tasks?

2015-10-12 Thread Jay Taylor
Hi Haosdent and Mesos friends,

I've rebuilt the cluster from scratch and installed mesos 0.24.1 from the
mesosphere apt repo:

$ dpkg -l | grep mesos
ii  mesos   0.24.1-0.2.35.ubuntu1404
 amd64Cluster resource manager with efficient resource isolation

Then added the `launcher_dir' flag to /etc/mesos-slave/launcher_dir on the
slaves:

mesos-worker1a:~$ cat /etc/mesos-slave/launcher_dir
/usr/libexec/mesos

And yet the task health-checks are still being launched from the sandbox
directory like before!

I've also tested setting the MESOS_LAUNCHER_DIR env var and get the
identical result (just as before on the cluster where many versions of
mesos had been installed):

STDOUT:

--container="mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --docker="docker" --help="false" --initialize_driver_logging="true"
> --logbufsecs="0" --logging_level="INFO"
> --mapped_directory="/mnt/mesos/sandbox" --quiet="false"
> --sandbox_directory="/tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --stop_timeout="0ns"
> --container="mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --docker="docker" --help="false" --initialize_driver_logging="true"
> --logbufsecs="0" --logging_level="INFO"
> --mapped_directory="/mnt/mesos/sandbox" --quiet="false"
> --sandbox_directory="/tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --stop_timeout="0ns"
> Registered docker executor on mesos-worker1a
> Starting task hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91
> Launching health check process:
> /tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb/mesos-health-check
> --executor=(1)@192.168.225.58:48912
> --health_check_json={"command":{"shell":true,"value":"docker exec
> mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb
> sh -c \" curl --silent --show-error --fail --tcp-nodelay --head -X GET
> --user-agent flux-capacitor-health-checker --max-time 1 http:\/\/
> 127.0.0.1:8000
> \""},"consecutive_failures":6,"delay_seconds":15,"grace_period_seconds":10,"interval_seconds":1,"timeout_seconds":1}
> --task_id=hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91
> Health check process launched at pid: 11253



STDERR:

--container="mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --docker="docker" --help="false" --initialize_driver_logging="true"
> --logbufsecs="0" --logging_level="INFO"
> --mapped_directory="/mnt/mesos/sandbox" --quiet="false"
> --sandbox_directory="/tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --stop_timeout="0ns"
> --container="mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --docker="docker" --help="false" --initialize_driver_logging="true"
> --logbufsecs="0" --logging_level="INFO"
> --mapped_directory="/mnt/mesos/sandbox" --quiet="false"
> --sandbox_directory="/tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
> --stop_timeout="0ns"
> Registered docker executor on mesos-worker1a
> Starting task hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91
> *Launching health check process:
> /tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb/mesos-health-check*
> --executor=(1)@192.168.225.58:48912
> --health_check_json={"command":{"shell":true,"value":"dock

Re: Can health-checks be run by Mesos for docker tasks?

2015-10-12 Thread Marco Massenzio
Are those the stdout logs of the agent? I don't see --launcher_dir set
there. However, if I look into one that is running off the
same 0.24.1 package, this is what I see:

I1012 14:56:36.933856  1704 slave.cpp:191] Flags at startup:
--appc_store_dir="/tmp/mesos/store/appc"
--attributes="rack:r2d2;pod:demo,dev" --authenticatee="crammd5"
--cgroups_cpu_enable_pids_and_tids_count="false"
--cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
--cgroups_limit_swap="false" --cgroups_root="mesos"
--container_disk_watch_interval="15secs" --containerizers="docker,mesos"
--default_role="*" --disk_watch_interval="1mins" --docker="docker"
--docker_kill_orphans="true" --docker_remove_delay="6hrs"
--docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
--enforce_container_disk_quota="false"
--executor_registration_timeout="1mins"
--executor_shutdown_grace_period="5secs"
--fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB"
--frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
--hadoop_home="" --help="false" --initialize_driver_logging="true"
--ip="192.168.33.11" --isolation="cgroups/cpu,cgroups/mem"
--launcher_dir="/usr/libexec/mesos"
--log_dir="/var/local/mesos/logs/agent" --logbufsecs="0"
--logging_level="INFO" --master="zk://192.168.33.1:2181/mesos/vagrant"
--oversubscribed_resources_interval="15secs" --perf_duration="10secs"
--perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
--quiet="false" --recover="reconnect" --recovery_timeout="15mins"
--registration_backoff_factor="1secs"
--resource_monitoring_interval="1secs"
--resources="ports:[9000-1];ephemeral_ports:[32768-57344]"
--revocable_cpu_low_priority="true"
--sandbox_directory="/var/local/sandbox" --strict="true"
--switch_user="true" --version="false" --work_dir="/var/local/mesos/agent"
(this is run off the Vagrantfile at [0] in case you want to reproduce).
That agent is not run via the init command, though, I execute it manually
via the `run-agent.sh` in the same directory.

I don't really think this matters, but I assume you also restarted the
agent after making the config changes?
(and, for your own sanity - you can double check the version by looking at
the very head of the logs).
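
One quick way to confirm what the running agent actually resolved is to grep its "Flags at startup" log line for launcher_dir. A sketch with a hypothetical flags string standing in for the real log line (in practice, pipe in the head of the agent's INFO log, whose location depends on --log_dir):

```shell
# The $flags string below is a stand-in for the agent's real
# "Flags at startup" log line.
flags='--launcher_dir="/usr/libexec/mesos" --logbufsecs="0" --logging_level="INFO"'
echo "$flags" | grep -o 'launcher_dir="[^"]*"'
# prints: launcher_dir="/usr/libexec/mesos"
```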






--
*Marco Massenzio*
Distributed Systems Engineer
http://codetrips.com

Re: Can health-checks be run by Mesos for docker tasks?

2015-10-12 Thread Marco Massenzio
On Mon, Oct 12, 2015 at 11:26 PM, Marco Massenzio <ma...@mesosphere.io>
wrote:

> Are those the stdout logs of the Agent? Because I don't see the
> --launcher-dir set, however, if I look into one that is running off the
> same 0.24.1 package, this is what I see:
>
> I1012 14:56:36.933856  1704 slave.cpp:191] Flags at startup:
> --appc_store_dir="/tmp/mesos/store/appc"
> --attributes="rack:r2d2;pod:demo,dev" --authenticatee="crammd5"
> --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos"
> --container_disk_watch_interval="15secs" --containerizers="docker,mesos"
> --default_role="*" --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins"
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB"
> --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
> --hadoop_home="" --help="false" --initialize_driver_logging="true"
> --ip="192.168.33.11" --isolation="cgroups/cpu,cgroups/mem"
> --launcher_dir="/usr/libexec/mesos"
> --log_dir="/var/local/mesos/logs/agent" --logbufsecs="0"
> --logging_level="INFO" --master="zk://192.168.33.1:2181/mesos/vagrant"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="1secs"
> --resource_monitoring_interval="1secs"
> --resources="ports:[9000-1];ephemeral_ports:[32768-57344]"
> --revocable_cpu_low_priority="true"
> --sandbox_directory="/var/local/sandbox" --strict="true"
> --switch_user="true" --version="false" --work_dir="/var/local/mesos/agent"
> (this is run off the Vagrantfile at [0] in case you want to reproduce).
> That agent is not run via the init command, though, I execute it manually
> via the `run-agent.sh` in the same directory.
>
> I don't really think this matters, but I assume you also restarted the
> agent after making the config changes?
> (and, for your own sanity - you can double check the version by looking at
> the very head of the logs).
>
>
> [0] http://github.com/massenz/zk-mesos

>
>
>
>
> --
> *Marco Massenzio*
> Distributed Systems Engineer
> http://codetrips.com
>
> On Mon, Oct 12, 2015 at 10:50 PM, Jay Taylor <outtat...@gmail.com> wrote:
>
>> Hi Haosdent and Mesos friends,
>>
>> I've rebuilt the cluster from scratch and installed mesos 0.24.1 from the
>> mesosphere apt repo:
>>
>> $ dpkg -l | grep mesos
>> ii  mesos   0.24.1-0.2.35.ubuntu1404
>>amd64Cluster resource manager with efficient resource isolation
>>
>> Then added the `launcher_dir' flag to /etc/mesos-slave/launcher_dir on
>> the slaves:
>>
>> mesos-worker1a:~$ cat /etc/mesos-slave/launcher_dir
>> /usr/libexec/mesos
>>
>> And yet the task health-checks are still being launched from the sandbox
>> directory like before!
>>
>> I've also tested setting the MESOS_LAUNCHER_DIR env var and get the
>> identical result (just as before on the cluster where many versions of
>> mesos had been installed):
>>
>> STDOUT:
>>
>> --container="mesos-20151012-184440-1625401536-5050-23953-S0.62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
>>> --docker="docker" --help="false" --initialize_driver_logging="true"
>>> --logbufsecs="0" --logging_level="INFO"
>>> --mapped_directory="/mnt/mesos/sandbox" --quiet="false"
>>> --sandbox_directory="/tmp/mesos/slaves/20151012-184440-1625401536-5050-23953-S0/frameworks/20151012-184440-1625401536-5050-23953-/executors/hello-app_web-v3.33597b73-1943-41b4-a308-76132eebcc91/runs/62d43b8f-6cd1-4c53-9ac8-84dbfc45bbcb"
>>> --stop_timeout="0ns"
>>> --container="mesos-20151012-184440-162540
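
For context on why a file under /etc/mesos-slave should work at all: the Mesosphere packages ship an init wrapper that turns every file in that directory into a --<filename>=<contents> command-line flag for the agent (environment variables with a MESOS_ prefix are the other route). A minimal sketch of that translation, using a temp directory in place of /etc/mesos-slave:

```shell
# Mimic the init wrapper: each file under the config directory becomes
# a --<filename>=<contents> flag on the mesos-slave command line.
confdir=$(mktemp -d)
echo "/usr/libexec/mesos" > "$confdir/launcher_dir"

flags=""
for f in "$confdir"/*; do
  flags="$flags --$(basename "$f")=$(cat "$f")"
done

# The assembled flags are appended to the mesos-slave invocation; the
# effective values show up in the agent's "Flags at startup" log line,
# which is the quickest place to verify the setting actually took.
echo "$flags"
```

If launcher_dir does not appear in the "Flags at startup" line after a restart, the wrapper never picked the file up, which would explain the behavior reported above.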

Re: error: 'sasl_errdetail' is deprecated: first deprecated in OS X 10.11

2015-10-12 Thread yuankui
YES!

Finally, I ran configure with the following options:

`../configure CXXFLAGS=-Wno-deprecated-declarations`

and it solved my problem!

thx

On Oct 13, 2015, at 1:44 AM, Marco Massenzio wrote:

I'm almost sure that you're running into 
https://issues.apache.org/jira/browse/MESOS-3030 

(there is a patch out to fix this: https://reviews.apache.org/r/39230/ 
)

--
Marco Massenzio
Distributed Systems Engineer
http://codetrips.com 
On Mon, Oct 12, 2015 at 4:54 PM, yuankui wrote:
hello, buddies

I'm compiling Mesos on Mac OS X 10.11 (El Capitan) and ran into the errors below.
version: mesos-0.24.0 & mesos-0.25.0-rc3


/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^
../../src/authentication/cram_md5/authenticator.cpp:334:20: error: 
'sasl_errdetail' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 string error(sasl_errdetail(connection));
  ^
/usr/include/sasl/sasl.h:770:25: note: 'sasl_errdetail' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errdetail(sasl_conn_t *conn) 
__OSX_AVAILABLE_BUT_DEPRECATED(__MAC_10_0,__MAC_10_11,__IPHONE_NA,__IPHONE_NA);
   ^
../../src/authentication/cram_md5/authenticator.cpp:514:18: error: 
'sasl_server_init' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
   int result = sasl_server_init(NULL, "mesos");
^
/usr/include/sasl/sasl.h:1016:17: note: 'sasl_server_init' has been explicitly 
marked deprecated here
LIBSASL_API int sasl_server_init(const sasl_callback_t *callbacks,
   ^
../../src/authentication/cram_md5/authenticator.cpp:519:11: error: 
'sasl_errstring' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 sasl_errstring(result, NULL, NULL));
 ^
/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^
../../src/authentication/cram_md5/authenticator.cpp:521:16: error: 
'sasl_auxprop_add_plugin' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
 result = sasl_auxprop_add_plugin(
  ^
/usr/include/sasl/saslplug.h:1013:17: note: 'sasl_auxprop_add_plugin' has been 
explicitly marked deprecated here
LIBSASL_API int sasl_auxprop_add_plugin(const char *plugname,
   ^
../../src/authentication/cram_md5/authenticator.cpp:528:13: error: 
'sasl_errstring' is deprecated: first deprecated in OS X 10.11
 [-Werror,-Wdeprecated-declarations]
   sasl_errstring(result, NULL, NULL));
   ^
/usr/include/sasl/sasl.h:757:25: note: 'sasl_errstring' has been explicitly 
marked deprecated here
LIBSASL_API const char *sasl_errstring(int saslerr,
   ^

As I'm not familiar with C++, I don't know how to solve this.

I believe I'm not the first one to come across this problem, so I'm here
for help!
thanks.