Re: [VOTE] Release Apache Mesos 1.0.3 (rc1)

2017-01-24 Thread Avinash Sridharan
-1 (Non-binding)

I realized that we missed the following commit in 1.0.3:
https://github.com/apache/mesos/commit/3e52a107c4073778de9c14bf5fcdeb6e342821aa

This commit fixes a bug seen on CoreOS because of which containers cannot
be launched on a CNI network. While specific to CoreOS, the bug can
manifest itself in any distro that does not set up `/etc/hosts` and
`/etc/hostname` by default. Hence, I wanted this commit to be backported to
1.0.3.
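
For anyone wanting to check whether their distro is affected, a quick
sketch (the file list is my assumption from the bug description, not
taken from the commit itself):

```shell
# Quick check: does this host provide the files that container network
# setup expects? Each iteration prints exactly one line either way.
for f in /etc/hosts /etc/hostname; do
  [ -e "$f" ] && echo "present: $f" || echo "MISSING: $f"
done
```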

Thanks,
Avinash


On Thu, Jan 19, 2017 at 8:43 AM, Vinod Kone  wrote:

> Hi all,
>
>
> Please vote on releasing the following candidate as Apache Mesos 1.0.3.
>
>
> 1.0.3 includes the following:
>
> 
> 
>
> * [MESOS-6142] - Frameworks may RESERVE for an arbitrary role.
>
>
> * [MESOS-6621] - SSL downgrade path will CHECK-fail when using both
> temporary and persistent sockets
>
> * [MESOS-6676] - Always re-link with scheduler during re-registration.
>
>
> * [MESOS-6917] - Segfault when the executor sets an invalid UUID when
> sending a status update.
>
>
> The CHANGELOG for the release is available at:
>
> https://git-wip-us.apache.org/repos/asf?p=mesos.git;a=blob_
> plain;f=CHANGELOG;hb=1.0.3-rc1
>
> 
> 
>
>
> The candidate for Mesos 1.0.3 release is available at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.0.3-rc1/mesos-1.0.3.tar.gz
>
>
> The tag to be voted on is 1.0.3-rc1:
>
> https://git-wip-us.apache.org/repos/asf?p=mesos.git;a=commit;h=1.0.3-rc1
>
>
> The MD5 checksum of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.0.3-rc1/
> mesos-1.0.3.tar.gz.md5
>
>
> The signature of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.0.3-rc1/
> mesos-1.0.3.tar.gz.asc
>
>
> The PGP key used to sign the release is here:
>
> https://dist.apache.org/repos/dist/release/mesos/KEYS
>
>
> The JAR is up in Maven in a staging repository here:
>
> https://repository.apache.org/content/repositories/orgapachemesos-1172
>
>
> Please vote on releasing this package as Apache Mesos 1.0.3!
>
>
> The vote is open until Mon Jan 23rd 17:00:00 PST 2017 and passes if a
> majority of at least 3 +1 PMC votes are cast.
>
>
> [ ] +1 Release this package as Apache Mesos 1.0.3
>
> [ ] -1 Do not release this package because ...
>
>
> Thanks,
>



-- 
Avinash Sridharan, Mesosphere
+1 (323) 702 5245


Re: best practices for log rotation

2017-01-24 Thread Cody Maloney
For DC/OS we do two pieces. We simplify Mesos' log output via a module so
that it doesn't have any of its internal logrotate logic, and just writes
to a single straight output file. We also include a mesos module so that
mesos task output goes to systemd journald, making it so every piece of
logging inside DC/OS ends up at a single source which can then be piped
wherever people like. The modules:
https://github.com/dcos/dcos-mesos-modules

Systemd auto-rotates the journald logs and caps their total size.
For the non-systemd logs we package logrotate (
https://github.com/dcos/dcos/tree/master/packages/logrotate) + a simple
helper script to catch accidentally orphaned files.
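
As a sketch, the non-journald file gets a logrotate stanza along these
lines (illustrative path and rotation counts, not the exact DC/OS config):

```
/var/log/mesos/mesos-master.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` matters here because the module keeps the single output
file open; truncating in place avoids having to signal the process on
each rotation.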

On Tue, Jan 24, 2017 at 9:40 AM Tomek Janiszewski  wrote:

> We are using logrotate to rotate, compress and delete old data. To keep
> logs easier to search we put them into Elastic/Kibana.
>
> Tue, 24.01.2017, 18:20, Charles Allen <
> charles.al...@metamarkets.com> wrote:
>
> Anyone have good hints for best practices for log rotation?
>
> Our mesos master ended up with many gigabytes of logs once we started
> running SPARK on it (approx 2GB of master INFO logs per day).
>
>


Re: best practices for log rotation

2017-01-24 Thread Tomek Janiszewski
We are using logrotate to rotate, compress and delete old data. To keep
logs easier to search we put them into Elastic/Kibana.

Tue, 24.01.2017, 18:20, Charles Allen <
charles.al...@metamarkets.com> wrote:

> Anyone have good hints for best practices for log rotation?
>
> Our mesos master ended up with many gigabytes of logs once we started
> running SPARK on it (approx 2GB of master INFO logs per day).
>


best practices for log rotation

2017-01-24 Thread Charles Allen
Anyone have good hints for best practices for log rotation?

Our mesos master ended up with many gigabytes of logs once we started
running SPARK on it (approx 2GB of master INFO logs per day).


Re: Question on dynamic reservations

2017-01-24 Thread Gabriel Hartmann
Totally agree that ultimately a single way of dealing with offers is
wanted. Quota does seem like a way forward, although its lack of chunkiness
dilutes the extent to which it guarantees progress.
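
The voluntary attribute check described earlier in this thread can be
sketched like this (plain dicts stand in for the Offer protobuf, and the
`framework` attribute name is a made-up convention, not a Mesos one):

```python
def should_decline(offer, my_tag="KAFKA"):
    """Return True if this offer's resources are tagged for another
    framework via an agent attribute (e.g. CASSANDRA).

    `offer` is a plain dict standing in for the Offer protobuf.
    """
    for attr in offer.get("attributes", []):
        if attr.get("name") == "framework" and attr.get("text") != my_tag:
            return True
    return False


# A Kafka framework seeing a Cassandra-tagged agent declines the offer:
offer = {"attributes": [{"name": "framework", "text": "CASSANDRA"}]}
print(should_decline(offer))  # -> True
```

Nothing enforces this, of course, which is Gabriel's point: it only works
when frameworks cooperate.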
On Wed, Jan 18, 2017 at 1:59 AM Povilas Versockas 
wrote:

> Thanks for the information and ideas! I think labeled static reservations
> would help if the framework could modify the labels at runtime.
>
> Personally, I think statically reserved resources should be offered only
> to frameworks in the reserved role, but look like free (role:*) resources.
> Then framework developers could use labeled dynamic reservations, and
> there wouldn't be any code differences between handling static and dynamic
> reservations. This would also let current frameworks like Cassandra or the
> dcos-commons library support static reservations.
>
> Given the current situation it looks like quota may be a solution for me.
> The current idea is to create a custom resource on my mesos-agents and set
> a quota.
>
> Example:
>
> Set mesos-agent with flag:
>
> --resources
> cpus(*):8;mem(*):4096;disk(*):4096;ports(*):[31000-32000];mysql(*):1
>
>
> And then using operators API set quota:
>
> {
>
>    "role": "role",
>
>"guarantee": [
>
>  {
>
>"name": "cpus",
>
>"type": "SCALAR",
>
>"scalar": { "value": 8 }
>
>  },
>
>  {
>
>"name": "mem",
>
>"type": "SCALAR",
>
>"scalar": { "value": 4096 }
>
>  },
>
> …
>
>  {
>
>    "name": "mysql",
>
>"type": "SCALAR",
>
>"scalar": { "value": 1 }
>
>  }
>
>]
>
>  }
>
>
> This should make the mesos-master set aside the agent's resources for my
> role and let me use dynamic reservations.
>
>
>
> On Wed, Jan 18, 2017 at 2:29 AM, Greg Mann  wrote:
>
> Thanks Gabriel, that makes sense. It sounds like labels on static
> reservations might be the most expedient path toward a solution to this
> problem, but that is not without its complications, as suggested in the
> related ticket which Neil filed a while back:
> https://issues.apache.org/jira/browse/MESOS-4476
>
> Povilas, also see this related ticket that Gabriel pointed me to:
> https://issues.apache.org/jira/browse/MESOS-6939
>
> It sounds like this is a real issue for stateful framework developers, so
> hopefully we will find some time soon to implement a solution. In the
> meantime, Povilas, I'm afraid to say I don't know exactly what solution to
> recommend. If anybody else in the community has some ideas, it would be
> great to hear them :)
>
> Cheers,
> Greg
>
>
> On Tue, Jan 17, 2017 at 2:52 PM, Gabriel Hartmann 
> wrote:
>
> @Greg: The reason people use static reservation is to enforce that
> particular resources (usually disks) can only be consumed by a particular
> framework.  They also don't know when the stateful service is going to be
> installed necessarily so they don't want to race with other frameworks to
> consume those special resources.  So static reservation is desirable.
> However, all stateful services also need more information about reserved
> resources than is natively provided by Mesos in the static reservation case
> (i.e. the labels he describes).  `dcos-commons` does the same thing.
> Various workarounds exist, but none are able to provide resource
> allocation enforcement, because only roles do that.  An alternate resource
> allocation enforcement mechanism is needed.  Usually this is the part where
> people start talking about quota.
>
> Neither option 1 nor option 2 provides a race-proof way to get fully
> labeled reserved resources.  It's been proposed in the past that adding
> labels to statically reserved resources be allowed.  That's kind of
> fine, except now you have these things that can't really be UNRESERVEd but
> look exactly like dynamic reservations, which can...
>
> Quota w/ chunks as a step in the deployment of stateful services is very
> desirable in an adversarial environment.  However, if you're in a
> cooperative environment (i.e. not in an adversarial relationship with
> other frameworks) and you had resources (particularly disk resources)
> with attributes on them, frameworks could voluntarily choose not to
> consume resources not meant for them.
>
> e.g. Disk resource has attribute `CASSANDRA`.  Ok, since I'm a Kafka
> framework I won't go use that disk.
>
> On Tue, Jan 17, 2017 at 11:24 AM Greg Mann  wrote:
>
> Hi Povilas,
> Another approach you could try is to use dynamic reservations only. You
> could either:
>
>    1. Alter your stateful framework to dynamically reserve the resources
>       that it needs, or
>    2. Add a script to your cluster tooling that would make use of the
>       operator endpoint for dynamic reservations [1] to dynamically
>       reserve the stateful framework's resources when your cluster is
>       initially provisioned. This would have a similar effect to static
>       reservations, but would allow you to set labels.
>
> Approach #1 makes sense to me; is there a reas