Re: [VOTE] Move Apache Mesos to Attic

2021-04-06 Thread Meng Zhu
+1

It has been a pleasure working with you all!

On Mon, Apr 5, 2021 at 10:58 AM Vinod Kone  wrote:

> Hi folks,
>
> Based on the recent conversations
> <
> https://lists.apache.org/thread.html/raed89cc5ab78531c48f56aa1989e1e7eb05f89a6941e38e9bc8803ff%40%3Cuser.mesos.apache.org%3E
> >
> on our mailing list, it seems to me that the majority consensus among the
> existing PMC is to move the project to the attic <
> https://attic.apache.org/>
> and let the interested community members collaborate on a fork in Github.
>
> I would like to call a vote to dissolve the PMC and move the project to the
> attic.
>
> Please reply to this thread with your vote. Only binding votes from
> PMC/committers count towards the final tally but everyone in the community
> is encouraged to vote. See process here
> .
>
> Thanks,
>


Re: [VOTE] Release Apache Mesos 1.8.1 (rc1)

2019-07-16 Thread Meng Zhu
+1

Tested on CentOS 7.4; the only failures were known flaky tests:

[  PASSED  ] 466 tests.
[  FAILED  ] 7 tests, listed below:
[  FAILED  ] CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs
[  FAILED  ] CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen
[  FAILED  ] DockerVolumeIsolatorTest.ROOT_CommandTaskNoRootfsWithVolumes
[  FAILED  ] DockerVolumeIsolatorTest.ROOT_CommandTaskNoRootfsSlaveRecovery
[  FAILED  ] DockerVolumeIsolatorTest.ROOT_EmptyCheckpointFileSlaveRecovery
[  FAILED  ] DockerVolumeIsolatorTest.ROOT_CommandTaskNoRootfsSingleVolumeMultipleContainers
[  FAILED  ] NvidiaGpuTest.ROOT_INTERNET_CURL_CGROUPS_NVIDIA_GPU_TensorflowGpuImage

-Meng

On Wed, Jul 10, 2019 at 1:48 PM Vinod Kone  wrote:

> +1 (binding).
>
> Tested in ASF CI. One build failed due to known flaky test
> https://issues.apache.org/jira/browse/MESOS-9594
>
>
> *Revision*: 4ae06448466408d9ec96ede953208057609f0744
>
>- refs/tags/1.8.1-rc1
>
> Configuration Matrix gcc clang
> centos:7 --verbose --disable-libtool-wrappers
> --disable-parallel-test-execution --enable-libevent --enable-ssl autotools
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Not run]
> cmake
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Not run]
> --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> autotools
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Not run]
> cmake
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Not run]
> ubuntu:16.04 --verbose --disable-libtool-wrappers
> --disable-parallel-test-execution --enable-libevent --enable-ssl autotools
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> cmake
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Failed]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> autotools
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> [image: Success]
> <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/71/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!u

Re: Simulate Mesos for Large Scale Framework Test

2019-07-10 Thread Meng Zhu
Hi Wuyang:

I am not sure whether that design was actually implemented; at least, I am
not aware of any such simulator.

If you want to simulate with your own framework, one possibility is to run
multiple agents with fake resources on the same node (or a few nodes), for
example:
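A minimal sketch of that approach (the master address, resource amounts, and
directories below are placeholders; each agent needs its own `--port` and
`--work_dir`, and you may need additional flags such as `--launcher=posix`
depending on your setup):

```
# Agent 1: advertise fake resources regardless of the actual hardware.
mesos-agent --master=<master-ip>:5050 \
  --port=5051 --work_dir=/tmp/mesos/agent1 \
  --resources="cpus:64;mem:262144;disk:1048576"

# Agent 2 on the same host: different port and work_dir, same fake resources.
mesos-agent --master=<master-ip>:5050 \
  --port=5052 --work_dir=/tmp/mesos/agent2 \
  --resources="cpus:64;mem:262144;disk:1048576"
```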

Alternatively, we have a lightweight benchmark test suite inside Mesos for
the allocator:
https://github.com/apache/mesos/blob/master/src/tests/hierarchical_allocator_benchmarks.cpp#L128
You can easily specify agent and framework profiles and spawn them. However,
it is currently not possible to directly plug in your own framework. There is
an example simulation that uses the fixture. You can extend the framework
profile and the fixture to encode more behaviors as needed.

In both cases, the Mesos default allocator is used. You can change the
allocation algorithm by plugging in your own allocator implementation (see
the sketch below for the master-module route).
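For the real-cluster route, a custom allocator is typically loaded as a
master module; a rough sketch (the module JSON path and module name are
placeholders):

```
# Load a custom allocator module and select it instead of the default
# HierarchicalDRF allocator.
mesos-master --work_dir=/var/lib/mesos \
  --modules=file:///etc/mesos/allocator-modules.json \
  --allocator=MyAllocatorModule
```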

Hope it helps.

-Meng

On Tue, Jul 9, 2019 at 5:06 PM Wuyang Zhang  wrote:

> Dear all,
>
> I developed a framework and would like to test the scheduling algorithm by
> simulating it in a large scale environment.
>
> I found an old doc at
> https://docs.google.com/document/d/1Ygq9MPWrqcQLf0J-mraVeEIRYk3xNXXQ0xRHFqTiXsQ/edit#.
> It basically satisfies all the requirements. However, I cannot find any
> implementations.
>
> To be specific, I have the following expectations:
> 1) Simulating 1k nodes with heterogeneous resources.
> 2) Loading job traces with defined running times.
> 3) Testing the scheduling algorithm in this simulated environment.
>
> Can you please give a pointer to do that?
>
> Best,
> Wuyang
>


Upcoming changes to the `/role` endpoint and `GET_QUOTA` call

2019-06-21 Thread Meng Zhu
Hi:

We are making some changes to the responses of the *`/role` endpoint and the
master `GET_QUOTA` call*. These changes are necessary as part of the quota
limits work.
Despite our efforts to keep the responses as backward compatible as possible,
there are some small incompatible tweaks. If you have tooling that depends on
these endpoints, please update it accordingly. Please check out the API
section of the design doc for more details as well as the rationale behind
these changes. Also, feel free to reach out if you have any questions or
concerns.

Changes to the `/role` endpoint:
- The `principal` field will be removed from the quota object.
- Resources with zero quantity will no longer be included in the
`guarantee` field.
- The `guarantee` field will continue to be filled. However, since we are
decoupling the quota guarantee from the limit, one can no longer assume
that the limit will be the same as the guarantee. A separate `limit` field
will be introduced.

*Before, *the response might contain:
```
{
  "quota": {
"guarantee": {
  "cpus": 1,
  "disk": 0,
  "gpus": 0,
  "mem": 512
},
"principal": "test-principal",
"role": "foo"
  }
}
```
*After*:
```
{
  "quota": {
"guarantee": {
  "cpus": 1,
  "mem": 512
},
"limit": {
  "cpus": 1,
  "mem": 512
},
"role": "foo"
  }
}
```
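If you have tooling that parses this object, a defensive reading of the new
shape might look like the sketch below (the field handling follows the JSON
above; treating a resource that is absent from `limit` as unlimited is my
assumption, not something stated here):

```python
# Hypothetical helper for tooling that consumes the new quota object.
def cpus_guarantee_and_limit(quota):
    # Zero-quantity resources are omitted from `guarantee`, so default to 0.
    guarantee = quota.get("guarantee", {})
    # `limit` is new and decoupled from `guarantee`; do not assume they match.
    limit = quota.get("limit", {})
    cpus_guarantee = guarantee.get("cpus", 0)
    cpus_limit = limit.get("cpus", float("inf"))  # absent => no limit (assumption)
    return cpus_guarantee, cpus_limit

quota = {
    "guarantee": {"cpus": 1, "mem": 512},
    "limit": {"cpus": 1, "mem": 512},
    "role": "foo",
}
print(cpus_guarantee_and_limit(quota))  # (1, 1)
```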

Changes to the `GET_QUOTA` call:
The `QuotaInfo` field is going to be deprecated and replaced by `QuotaConfig`,
but we will continue to fill in as much as we can. Similar to the `/role`
endpoint above:
- The `principal` field will no longer be filled in the `QuotaInfo` object.
- The `guarantee` field will continue to be filled. However, since we are
decoupling the quota guarantee from the limit, one can no longer assume
that the limit will be the same as the guarantee.
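For reference, here is a minimal way to issue the call against the master's
v1 operator API, assuming a master at localhost:5050 (standard library only):

```python
import json
import urllib.request

# Ask the master for the current quota configuration.
request = urllib.request.Request(
    "http://localhost:5050/api/v1",
    data=json.dumps({"type": "GET_QUOTA"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.dumps(json.loads(response.read()), indent=2))
```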

Thanks,
Meng


Upcoming allocator change to clusters using oversubscribed resources with quota under DRF

2019-05-29 Thread Meng Zhu
Folks:

If you are not using oversubscribed resources along with quota under DRF
(all three at the same time), read no further. Just stay tuned for the
upcoming shiny new allocator with decoupled quota guarantees and limits :)

OK, for the rest of you, you are truly advanced users! Here is the news.

As part of the tech debt cleanup in the allocator, we plan to remove the
quota role sorter and keep only a single role sorter for all the roles. This
will simplify the allocator logic and help speed up feature development.

This will result in one behavior change if you are using oversubscribed
resources with quota under DRF. Previously, in the quota allocation stage,
revocable resources were counted towards *neither* the total resource pool
*nor* a role's allocated resources when sorting with DRF. This is arguably
the right behavior. However, after the aforementioned removal, all resources,
both revocable and non-revocable, will be counted when calculating DRF shares
in the quota allocation stage. This means that a quota role that consumes a
lot of revocable resources but not many non-revocable ones, which previously
would have been sorted towards the head of the queue, is now likely to be
sorted towards the tail.
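To make the effect concrete, here is a small dominant-share calculation with
made-up numbers (considering only CPUs for simplicity):

```python
# Hypothetical numbers for illustration only (cpus).
total_nonrevocable = 100.0
total_revocable = 50.0
role_nonrevocable = 5.0   # non-revocable cpus allocated to the quota role
role_revocable = 30.0     # revocable cpus allocated to the same role

# Old behavior in the quota allocation stage: revocable resources counted
# towards neither the total pool nor the role's allocation.
old_share = role_nonrevocable / total_nonrevocable                      # 0.05

# New behavior: all resources are counted.
new_share = (role_nonrevocable + role_revocable) / (total_nonrevocable
                                                    + total_revocable)  # ~0.23

print(old_share, new_share)
# The role's share roughly quadruples, so it now sorts later in the queue.
```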

If you have concerns over this behavior change, feel free to chime in and
reach out.

Link to the ticket: MESOS-9802


-Meng


Re: [VOTE] Release Apache Mesos 1.5.3 (rc1)

2019-03-13 Thread Meng Zhu
+1
`sudo make check` on CentOS 7.4; only known flaky tests failed.

On Tue, Mar 12, 2019 at 4:44 PM Gilbert Song  wrote:

> +1 (binding).
>
> -Gilbert
>
> On Thu, Mar 7, 2019 at 10:09 AM Greg Mann  wrote:
>
> > +1 (binding)
> >
> > Ran through internal CI and observed only known flaky tests; almost all
> > configurations passed with no failures.
> >
> > Cheers,
> > Greg
> >
> > On Thu, Mar 7, 2019 at 1:55 AM Vinod Kone  wrote:
> >
> > > +1 (binding)
> > >
> > > Ran in ASF CI. Saw some flaky tests but otherwise looks good.
> > >
> > > *Revision*: b1dbba03af23b0222d11f2b7ae936d77ef42650d
> > >
> > >- refs/tags/1.5.3-rc1
> > >
> > > Configuration Matrix gcc clang
> > > centos:7 --verbose --disable-libtool-wrappers
> > > --disable-parallel-test-execution --enable-libevent --enable-ssl
> > autotools
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Not run]
> > > cmake
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Not run]
> > > --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> > > autotools
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Not run]
> > > cmake
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Not run]
> > > ubuntu:16.04 --verbose --disable-libtool-wrappers
> > > --disable-parallel-test-execution --enable-libevent --enable-ssl
> > autotools
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > cmake
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> > > autotools
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > [image: Success]
> > > <
> >
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/67/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> > >
> > > cmake
> > > [image: Success]
> > > <

Re: Mesos Master Crashes when Task launched with LAUNCH_GROUP fails

2019-02-28 Thread Meng Zhu
Hi Nimi:

Thanks for reporting this.

From the log snippet, it looks like, when de-allocating resources, the agent
does not have the port resources that are supposed to have been allocated.
Can you provide the master log (covering at least the period from when the
resources on the agent were offered up to the crash)? Also, can you create a
JIRA ticket and upload the log there? (
https://issues.apache.org/jira/projects/MESOS/issues)

-Meng

On Thu, Feb 28, 2019 at 1:58 PM Nimi W  wrote:

> Hi,
>
> Mesos: 1.7.1
>
> I'm trying to debug an issue where if I launch a task using the
> LAUNCH_GROUP method,
> and the task fails to start, the mesos master will crash. I am using a
> custom framework
> I've built using the HTTP Scheduler API.
>
> When my framework received an offer - I return with an ACCEPT with this
> JSON:
>
> https://gist.github.com/nemosupremo/3b23c4e1ca0ab241376aa5b975993270
>
> I then receive the following UPDATE events:
>
> TASK_STARTING
> TASK_RUNNING
> TASK_FAILED
>
> My framework then immediately tries to relaunch the task on the next
> OFFERS:
>
> https://gist.github.com/nemosupremo/2b02443241c3bd002f04be034d8e64f7
>
> But sometime between when I get that event and when I try to acknowledge
> the TASK_FAILED event, the Mesos master crashes with:
>
> Feb 28 21:34:02 master03 mesos-master[7124]: F0228 21:34:02.118693  7142
> sorter.hpp:357] Check failed: resources.at(slaveId).contains(toRemove)
> Resources disk(allocated: faust)(reservations: [(STATIC,faust)]):1;
> cpus(allocated: faust)(reservations: [(STATIC,faust)]):0.1; mem(allocated:
> faust)(reservations: [(STATIC,faust)]):64 at agent
> 643078ba-8cb8-4582-b9c3-345d602506c8-S0 does not contain cpus(allocated:
> faust)(reservations: [(STATIC,faust)]):0.1; mem(allocated:
> faust)(reservations: [(STATIC,faust)]):64; disk(allocated:
> faust)(reservations: [(STATIC,faust)]):1; ports(allocated:
> faust)(reservations: [(STATIC,faust)]):[-]
> Feb 28 21:34:02 master03 mesos-master[7124]: *** Check failure stack
> trace: ***
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd935e48d
> google::LogMessage::Fail()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd9360240
> google::LogMessage::SendToLog()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd935e073
> google::LogMessage::Flush()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd9360c69
> google::LogMessageFatal::~LogMessageFatal()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd83d85f8
> mesos::internal::master::allocator::DRFSorter::unallocated()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd83a78af
> mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::untrackAllocatedResources()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd83ba281
> mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::recoverResources()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd92a6631
> process::ProcessBase::consume()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd92c878a
> process::ProcessManager::resume()
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd92cc4d6
> _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUlvE_vEEE6_M_runEv
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd6289c80
> (unknown)
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd5da56ba
> start_thread
> Feb 28 21:34:02 master03 mesos-master[7124]: @ 0x7f1fd5adb41d
> (unknown)
> Feb 28 21:34:02 master03 systemd[1]: mesos-master.service: Main process
> exited, code=killed, status=6/ABRT
> Feb 28 21:34:02 master03 systemd[1]: mesos-master.service: Unit entered
> failed state.
> Feb 28 21:34:02 master03 systemd[1]: mesos-master.service: Failed with
> result 'signal'.
>
> The entire process works with the older LAUNCH API (for some reason the
> docker task crashes with filesystem permission issues when using
> LAUNCH_GROUPS)
>


[RESULT][VOTE] Release Apache Mesos 1.4.3 (rc2)

2019-02-22 Thread Meng Zhu
Hi all,

The vote for Mesos 1.4.3 (rc2) has passed with the
following votes.

+1 (Binding)
--
Vinod Kone
Gastón Kleiman
Gilbert Song

There were no 0 or -1 votes.

Please find the release at:
https://dist.apache.org/repos/dist/release/mesos/1.4.3

It is recommended to use a mirror to download the release:
http://www.apache.org/dyn/closer.cgi

The CHANGELOG for the release is available at:
https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.4.3

The mesos-1.4.3.jar has been released to:
https://repository.apache.org

The website (http://mesos.apache.org) will be updated shortly to reflect
this release.

Thanks,
Meng


Re: [VOTE] Release Apache Mesos 1.7.2 (rc1)

2019-02-22 Thread Meng Zhu
+1, ran through our internal CI, only flaky failures

On Thu, Feb 21, 2019 at 11:41 AM Greg Mann  wrote:

> +1
>
> Built on CentOS 7.4 and ran all tests as root. Only 3 test failures were
> observed, all known flakes.
>
> Cheers,
> Greg
>
> On Wed, Feb 20, 2019 at 7:12 AM Vinod Kone  wrote:
>
>> +1
>>
>> Ran this on ASF CI.
>>
>> The red builds are a flaky infra issue and a known flaky test
>> .
>>
>> *Revision*: 58cc918e9acc2865bb07047d3d2dff156d1708b2
>>
>>- refs/tags/1.7.2-rc1
>>
>> Configuration Matrix gcc clang
>> centos:7 --verbose --disable-libtool-wrappers
>> --disable-parallel-test-execution --enable-libevent --enable-ssl autotools
>> [image: Failed]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Not run]
>> cmake
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Not run]
>> --verbose --disable-libtool-wrappers --disable-parallel-test-execution
>> autotools
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Not run]
>> cmake
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Not run]
>> ubuntu:16.04 --verbose --disable-libtool-wrappers
>> --disable-parallel-test-execution --enable-libevent --enable-ssl autotools
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> cmake
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> --verbose --disable-libtool-wrappers --disable-parallel-test-execution
>> autotools
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
>> >
>> cmake
>> [image: Success]
>> <
>> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/66/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(d

Re: [VOTE] Release Apache Mesos 1.6.2 (rc1)

2019-02-20 Thread Meng Zhu
+1 -- ran on CentOS 7.4; only known flaky tests failed

-Meng

On Wed, Feb 20, 2019 at 4:57 PM Gastón Kleiman  wrote:

> +1 (binding) — ran the build through Mesosphere's internal CI and only two
> known flaky tests failed.
>
> On Tue, Feb 19, 2019 at 11:56 AM Greg Mann  wrote:
>
>> Hi all,
>>
>> Please vote on releasing the following candidate as Apache Mesos 1.6.2.
>>
>>
>> 1.6.2 includes a number of bug fixes since 1.6.1; the CHANGELOG for the
>> release is available at:
>>
>> https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.6.2-rc1
>>
>> 
>>
>> The candidate for Mesos 1.6.2 release is available at:
>> https://dist.apache.org/repos/dist/dev/mesos/1.6.2-rc1/mesos-1.6.2.tar.gz
>>
>> The tag to be voted on is 1.6.2-rc1:
>> https://gitbox.apache.org/repos/asf?p=mesos.git;a=commit;h=1.6.2-rc1
>>
>> The SHA512 checksum of the tarball can be found at:
>>
>> https://dist.apache.org/repos/dist/dev/mesos/1.6.2-rc1/mesos-1.6.2.tar.gz.sha512
>>
>> The signature of the tarball can be found at:
>>
>> https://dist.apache.org/repos/dist/dev/mesos/1.6.2-rc1/mesos-1.6.2.tar.gz.asc
>>
>> The PGP key used to sign the release is here:
>> https://dist.apache.org/repos/dist/release/mesos/KEYS
>>
>> The JAR is in a staging repository here:
>> https://repository.apache.org/content/repositories/orgapachemesos-1246
>>
>> Please vote on releasing this package as Apache Mesos 1.6.2!
>>
>> The vote is open until Fri Feb 22 11:54 PST 2019, and passes if a
>> majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Mesos 1.6.2
>> [ ] -1 Do not release this package because ...
>>
>> Thanks,
>> Greg
>>
>


Re: Mesos webinterface showing agents unreachable 2

2019-02-14 Thread Meng Zhu
Hi Marc:

Those are likely to be old unreachable agent entries. Their GC timing is
controlled by the master flag `--registry_max_agent_age`; the default value
is two weeks.

http://mesos.apache.org/documentation/latest/configuration/master/
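For example, to prune old unreachable agents from the registry more
aggressively than the two-week default, something along these lines should
work (the values below are illustrative):

```
mesos-master --work_dir=/var/lib/mesos \
  --registry_max_agent_age=2days \
  --registry_gc_interval=15mins
```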

-Meng

On Thu, Feb 14, 2019 at 12:10 PM Marc Roos  wrote:

>
>
> In my test setup I have 3 activated agents and 2 unreachable, although
> tasks are being deployed among all 3. I take it unreachable should be 0.
> How should I troubleshoot this? Could these 2 be some old hosts that I
> maybe installed in the past?
>
> Agents
> Activated 3
> Deactivated   0
> Unreachable   2
>
>
>
>


[VOTE] Release Apache Mesos 1.4.3 (rc2)

2019-02-13 Thread Meng Zhu
Hi all,

Please vote on releasing the following candidate as Apache Mesos 1.4.3.

1.4.3 includes the following:

https://issues.apache.org/jira/issues/?filter=12345433

The CHANGELOG for the release is available at:
https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.4.3-rc2


The candidate for Mesos 1.4.3 release is available at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc2/mesos-1.4.3.tar.gz

The tag to be voted on is 1.4.3-rc2:
https://gitbox.apache.org/repos/asf?p=mesos.git;a=commit;h=1.4.3-rc2

The SHA512 checksum of the tarball can be found at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc2/mesos-1.4.3.tar.gz.sha512

The signature of the tarball can be found at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc2/mesos-1.4.3.tar.gz.asc

The PGP key used to sign the release is here:
https://dist.apache.org/repos/dist/release/mesos/KEYS

The JAR is in a staging repository here:
https://repository.apache.org/content/repositories/orgapachemesos-1245

Please vote on releasing this package as Apache Mesos 1.4.3!

The vote is open until Mon Feb 18 18:27:30 PST 2019 and passes if a
majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Mesos 1.4.3
[ ] -1 Do not release this package because ...

Thanks,
Meng


Re: [VOTE] Release Apache Mesos 1.7.1 (rc2)

2019-01-26 Thread Meng Zhu
+1

sudo make check on CentOS 7.4.

All failed tests are known to be flaky:

[  FAILED  ] CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs
[  FAILED  ] CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen
[  FAILED  ] NvidiaGpuTest.ROOT_INTERNET_CURL_CGROUPS_NVIDIA_GPU_NvidiaDockerImage

-Meng

On Fri, Jan 18, 2019 at 2:59 PM Gilbert Song  wrote:

> +1 (binding).
>
> All tests passed except 5 failures (known flakiness) from our internal CI:
>
> FLAG=CMake,label=mesos-ec2-centos-7
>
>  mesos-ec2-centos-7-CMake.Mesos.CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs
>
> FLAG=SSL,label=mesos-ec2-centos-7
>  mesos-ec2-centos-7-SSL.MESOS_TESTS_ABORTED.xml.[empty]
>
> FLAG=SSL,label=mesos-ec2-debian-9
>
>  
> mesos-ec2-debian-9-SSL.Mesos.FetcherCacheTest.CachedCustomOutputFileWithSubdirectory
>
> FLAG=SSL,label=mesos-ec2-ubuntu-16.04
>
>  
> mesos-ec2-ubuntu-16.04-SSL.Mesos.CniIsolatorTest.ROOT_INTERNET_CURL_LaunchCommandTask
>
> FLAG=SSL,label=mesos-ec2-centos-6
>
>  
> mesos-ec2-centos-6-SSL.Mesos.GarbageCollectorIntegrationTest.LongLivedDefaultExecutorRestart
>
> -Gilbert
>
> On Wed, Jan 16, 2019 at 2:24 PM Vinod Kone  wrote:
>
> > +1 (binding)
> >
> > Tested on ASF CI. Failing builds are due to missed SSL dep in the docker
> > build file and a flaky test.
> >
> > *Revision*: d5678c3c5500cec72e22e775d9d048c55c128954
> >
> >- refs/tags/1.7.1-rc2
> >
> > Configuration Matrix gcc clang
> > centos:7 --verbose --disable-libtool-wrappers
> > --disable-parallel-test-execution --enable-libevent --enable-ssl
> autotools
> > [image: Success]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Not run]
> > cmake
> > [image: Success]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Not run]
> > --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> > autotools
> > [image: Success]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Not run]
> > cmake
> > [image: Success]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Not run]
> > ubuntu:16.04 --verbose --disable-libtool-wrappers
> > --disable-parallel-test-execution --enable-libevent --enable-ssl
> autotools
> > [image: Failed]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Failed]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > cmake
> > [image: Failed]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > [image: Failed]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--disable-parallel-test-execution%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A16.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/
> >
> > --verbose --disable-libtool-wrappers --disable-parallel-test-execution
> > autotools
> > [image: Success]
> > <
> https://builds.apache.org/view/M-R/view/Mesos/job/Mesos-Release/59/BUILDTOOL=autotools,COMPILER=gcc,CONFIG

[VOTE] Release Apache Mesos 1.4.3 (rc1)

2019-01-25 Thread Meng Zhu
Hi all,

Please vote on releasing the following candidate as Apache Mesos 1.4.3.

1.4.3 includes the following:

https://issues.apache.org/jira/issues/?filter=12345433

The CHANGELOG for the release is available at:
https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.4.3-rc1


The candidate for Mesos 1.4.3 release is available at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc1/mesos-1.4.3.tar.gz

The tag to be voted on is 1.4.3-rc1:
https://gitbox.apache.org/repos/asf?p=mesos.git;a=commit;h=1.4.3-rc1

The SHA512 checksum of the tarball can be found at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc1/mesos-1.4.3.tar.gz.sha512

The signature of the tarball can be found at:
https://dist.apache.org/repos/dist/dev/mesos/1.4.3-rc1/mesos-1.4.3.tar.gz.asc

The PGP key used to sign the release is here:
https://dist.apache.org/repos/dist/release/mesos/KEYS

The JAR is in a staging repository here:
https://repository.apache.org/content/repositories/orgapachemesos-1244

Please vote on releasing this package as Apache Mesos 1.4.3!

The vote is open until Mon Jan 30th 14:02:55 PST 2019 and passes if a
majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Mesos 1.4.3
[ ] -1 Do not release this package because ...

Thanks,
Meng


Quota 2.0 proposal

2019-01-20 Thread Meng Zhu
Hi folks:

I am excited to propose Quota 2.0 for better resource management on Mesos,
with explicit limits (decoupled from guarantees), generic quota (which can
be set on resources with metadata and on more generic resources such as the
number of containers), and bright, shiny new APIs.

You can find the design doc here. Please feel free to leave comments and
suggestions.

I have also added an agenda item for the upcoming API working group meeting
on Tuesday (Jan 22nd, 11am PST); please join if you are interested.

Thanks,
Meng


Re: [VOTE] Release Apache Mesos 1.7.1 (rc1)

2018-12-28 Thread Meng Zhu
+1
`make check` passed on Ubuntu 18.04 with Clang 6.

-Meng

On Fri, Dec 21, 2018 at 2:48 PM Chun-Hung Hsiao  wrote:

> Hi all,
>
> Please vote on releasing the following candidate as Apache Mesos 1.7.1.
>
>
> 1.7.1 includes the following:
>
> 
> * This is a bug fix release. Also includes performance and API
>   improvements:
>
>   * **Allocator**: Improved allocation cycle time substantially
> (see MESOS-9239 and MESOS-9249). These reduce the allocation
> cycle time in some benchmarks by 80%.
>
>   * **Scheduler API**: Improved the experimental `CREATE_DISK` and
> `DESTROY_DISK` operations for CSI volume recovery (see MESOS-9275
> and MESOS-9321). Storage local resource providers now return disk
> resources with the `source.vendor` field set, so frameworks needs to
> upgrade the `Resource` protobuf definitions.
>
>   * **Scheduler API**: Offer operation feedbacks now present their agent
> IDs and resource provider IDs (see MESOS-9293).
>
>
> The CHANGELOG for the release is available at:
>
> https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.7.1-rc1
>
> 
>
> The candidate for Mesos 1.7.1 release is available at:
> https://dist.apache.org/repos/dist/dev/mesos/1.7.1-rc1/mesos-1.7.1.tar.gz
>
> The tag to be voted on is 1.7.1-rc1:
> https://gitbox.apache.org/repos/asf?p=mesos.git;a=commit;h=1.7.1-rc1
>
> The SHA512 checksum of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.7.1-rc1/mesos-1.7.1.tar.gz.sha512
>
> The signature of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.7.1-rc1/mesos-1.7.1.tar.gz.asc
>
> The PGP key used to sign the release is here:
> https://dist.apache.org/repos/dist/release/mesos/KEYS
>
> The JAR is in a staging repository here:
>
> https://repository.apache.org/content/repositories/releases/org/apache/mesos/mesos/1.7.1-rc1/
>
> Please vote on releasing this package as Apache Mesos 1.7.1!
>
> To accommodate for the holidays, the vote is open until Mon Dec 31
> 14:00:00 PST 2018 and passes if a majority of at least 3 +1 PMC votes are
> cast.
>
> [ ] +1 Release this package as Apache Mesos 1.7.1
> [ ] -1 Do not release this package because ...
>
> Thanks,
> Chun-Hung & Gaston
>


Re: New scheduler API proposal: unsuppress and clear_filter

2018-12-10 Thread Meng Zhu
Thanks Ben. Some thoughts below:

> From a scheduler's perspective the difference between the two models is:
>
> (1) expressing "how much more" you need
> (2) expressing an offer "matcher"
>
> So:
>
> (1) covers the middle part of the demand quantity spectrum we currently
> have: unsuppressed -> infinite additional demand, suppressed -> 0
> additional demand, and now also unsuppressed w/ request of X -> X
> additional demand
>

I am not quite sure the middle ground (expressing "how much more") is needed.
Even with matchers, the framework may still find itself cycling through
several offers before finding the right resources. Setting an "effective
limit" will surely prolong this process. I guess the motivation here is to
avoid, e.g., sending too many resources to a just-unsuppressed framework that
only wants to launch a small task. I would say the inefficiency of flooding
the framework with offers is tolerable as long as the framework rejects most
offers in time, since we are still making progress. Even in cases where such
limiting is desired (e.g. the number of frameworks is very large), I think it
is more appropriate to rely on operators to configure cluster priority by
e.g. setting limits, than to expect individual frameworks to perform such an
altruistic action to limit their own offers (while still having pending
work).


> (2) is a global filtering mechanism to avoid getting offers in an unusable
> shape
>

Yeah, as you mentioned, I think we all agree that adding global matchers to
filter out undesired resources is a good direction--which I think is what
matters most here. I think the small difference lies in how the framework
should communicate that information: via a more declarative approach, or by
exposing the global matchers to frameworks directly.


> They both solve inefficiencies we have, and they're complementary: a
> "request" could actually consist of (1) and (2), e.g. "I need an additional
> 10 cpus, 100GB mem, and I want offers to contain [1cpu, 10GB mem]".
>
> I'll schedule a meeting to discuss further. We should also make sure we
> come back to the original problem in this thread around REVIVE retries.
>
> On Mon, Dec 10, 2018 at 11:58 AM Benjamin Bannier <
> benjamin.bann...@mesosphere.io> wrote:
>
> > Hi Ben et al.,
> >
> > I'd expect frameworks to *always* know how to accept or decline offers in
> > general. More involved frameworks might know how to suppress offers. I
> > don't expect that any framework models filters and their associated
> > durations in detail (that's why I called them a Mesos implementation
> > detail) since there is not much benefit to a framework's primary goal of
> > running tasks as quickly as possible.
> >
> > > I couldn't quite tell how you were imagining this would work, but let
> me
> > spell out the two models that I've been considering, and you can tell me
> if
> > one of these matches what you had in mind or if you had a different model
> > in mind:
> >
> > > (1) "Effective limit" or "give me this much more" ...
> >
> > This sounds more like an operator-type than a framework-type API to me.
> > I'd assume that frameworks would not worry about their total limit the
> way
> > an operator would, but instead care about getting resources to run a
> > certain task at a point in time. I could also imagine this being easy to
> > use incorrectly as frameworks would likely need to understand their total
> > limit when issuing the call which could require state or coordination
> among
> > internal framework components (think: multi-purpose frameworks like
> > Marathon or Aurora).
> >
> > > (2) "Matchers" or "give me things that look like this": when a
> scheduler
> > expresses its "request" for a role, it would act as a "matcher" (opposite
> > of filter). When mesos is allocating resources, it only proceeds if
> > (requests.matches(resources) && !filters.filtered(resources)). The open
> > ended aspect here is what a matcher would consist of. Consider a case
> where
> > a matcher is a resource quantity and multiple are allowed; if any matcher
> > matches, the result is a match. This would be equivalent to letting
> > frameworks specify their own --min_allocatable_resources for a role
> (which
> > is something that has been considered). The "matchers" could be more
> > sophisticated: full resource objects just like filters (but global), full
> > resource objects but with quantities for non-scalar resources like ports,
> > etc.
> >
> > I was thinking in this direction, but what you described is more involved
> > than what I had in mind as a possible first attempt. I'd expect that
> > frameworks currently use `REVIVE` as a proxy for `REQUEST_RESOURCES`, not
> > as a way to manage their filter state tracked in the allocator. Assuming
> we
> > have some way to express resource quantities (i.e., MESOS-9314), we
> should
> > be able to improve on `REVIVE` by providing a `REQUEST_RESOURCES` which
> > clears all filters for resource containing the requested resources (or
> all
> > filters i

Re: New scheduler API proposal: unsuppress and clear_filter

2018-12-04 Thread Meng Zhu
Hi Benjamin:

Thanks for the great feedback.

I like the idea of giving frameworks more meaningful and fine-grained control
over which filters to remove; this is especially likely to help adoption. For
example, letting the framework send an optional agentID that instructs Mesos
to only clear filters on that agent might help a task launch with an agent
constraint.

However, when it comes to framework-sent desired resource profiles, we should
give it more thought. There is always the question of to what degree we
support the various metadata in the resource schema. I feel the current
schema is too complex for expressing resource needs, let alone for respecting
it in the allocator (even just for the purpose of removing filters). We
probably want to first introduce a more concise format (such as a resource
quantity) for all purposes of specifying desired resource profiles (clearing
filters, quota guarantees, min_allocatable_resources, etc.) and start from
there.

I suggest just adding the optional agentID for now; we can always add support
for specifying resource requirements in the future. And since its semantics
are far from "requesting resources", I suggest keeping the name CLEAR_FILTERS
(or REMOVE_FILTERS).

What do you think?

-Meng

On Tue, Dec 4, 2018 at 1:50 AM Benjamin Bannier <
benjamin.bann...@mesosphere.io> wrote:

> Hi Meng,
>
> thanks for the proposal, I agree that the way these two aspects are
> currently entangled is an issue (e.g., for master/allocator performance
> reasons). At the same time, the workflow we currently expect frameworks to
> follow is conceptually not hard to grasp,
>
> (1) If framework has work then
> (i) put framework in unsuppressed state,
> (ii) decline not matching offers with a long filter duration.
> (2) If an offer matches, accept.
> (3) If there is no more work, suppress. GOTO (1).
>
> Here the framework does not need to track its filters across allocation
> cycles (they are an unexposed implementation detail of the hierarchical
> allocator anyway) which e.g., allows metaschedulers like Marathon or Apache
> Aurora to decouple the scheduling of different workloads. A downside of
> this interface is that
>
> * there is little incentive for frameworks to use SUPPRESS in addition to
> filters, and
> * unsupression is all-or-nothing, forcing the master to send potentially
> all unused resources to one framework, even if it is only interested in a
> fraction. This can cause, at least temporal, non-optimal allocation
> behavior.
>
> It seems to me that even though adding UNSUPPRESS and CLEAR_FILTERS would
> give frameworks more control, it would only be a small improvement. In
> above framework workflow we would allow a small improvement if the
> framework knows that a new workload matches a previously running workflow
> (i.e., it can infer that no filters for the resources it is interested in
> is active) so that it can issue UNSUPPRESS instead of CLEAR_FILTERS.
> Incidentally, there seems little local benefit for frameworks to use these
> new calls as they’d mostly help the master and I’d imagine we wouldn’t want
> to imply that clearing filters would unsuppress the framework. This seems
> too little to me, and we run the danger that frameworks would just always
> pair UNSUPPRESS and CLEAR_FILTERS (or keep using REVIVE) to simplify their
> workflow. If we’d model the interface more along framework needs, there
> would be clear benefit which would help adoption.
>
> A more interesting call for me would be REQUEST_RESOURCES. It maps very
> well onto framework needs (e.g., “I want to launch a task requiring these
> resources”), and clearly communicates a requirement to the master so that
> it e.g., doesn’t need to remove all filters for a framework. It also seems
> to fit the allocator model pretty well which doesn’t explicitly expose
> filters. I believe implementing it should not be too hard if we'd restrict
> its semantics to only communicate to the master that a framework _is
> interested in a certain resource_ without promising that the framework
> _will get them in any amount of time_ (i.e., no need to rethink DRF
> fairness semantics in the hierarchical allocator). I also feel that if we
> have REQUEST_RESOURCES we would have some freedom to perform further
> improvements around filters in the master/allocator (e.g., filter
> compatification, work around increasing the default filter duration, …).
>
>
> A possible zeroth implementation for REQUEST_RESOURCES with the
> hierarchical allocator would be to have it remove any filters containing
> the requested resource and likely to unsuppress the framework. A
> REQUEST_RESOURCES call would hold an optional resource and an optional
> AgentID; the case where both are empty would map onto CLEAR_FILTERS.
>
>
> That being said, 

Re: New scheduler API proposal: unsuppress and clear_filter

2018-12-03 Thread Meng Zhu
See my comments inline.

On Mon, Dec 3, 2018 at 5:43 PM Vinod Kone  wrote:

> Thanks Meng for the explanation.
>
> I imagine most frameworks do not remember what stuff they filtered much
> less figure out how previously filtered stuff  can satisfy new operations.
> That sounds complicated!
>

Frameworks do not need to remember what filters they currently have. Only
knowing
the resource profiles of the current vs. the previous operation would help
a lot.
But yeah, even this may be too much complexity.

>
> But I like your example. So a suggestion we could make to frameworks could
> be to use CLEAR_FILTERS when they have new work, e.g., scale up/down, new
> app (they might want to use this even if they aren't suppressed!); and to
> use UNSUPPRESS when they are rescheduling old work?
>

Yeah, these are the general guideline.

I want to echo and reemphasize that CLEAR_FILTERS is orthogonal to
suppression. Frameworks should consider clearing filters regardless of
suppression.

Ideally, when there is new, different work, old irrelevant filters should be
cleared. This helps the framework get more offers and makes the allocator run
faster (filters can take up the bulk of the allocation time when they build
up). On the flip side, calling CLEAR_FILTERS too often might also have
performance implications (esp. if the master/allocator actors are already
stressed).

Thoughts?
>
> On Mon, Dec 3, 2018 at 6:51 PM Meng Zhu  wrote:
>
> > Hi Vinod:
> >
> > Yeah, `CLEAR_FILTERS` sounds good.
> >
> > UNSUPPRESS should be used whenever currently suppressed framework wants
> to
> > resume getting offers after a previous SUPPRESS call.
> >
> > As for `CLEAR_FILTERS`, the short (but not very useful) suggestion is to
> > call it whenever the framework wants to clear all the existing filters.
> >
> > To elaborate it, frameworks decline and accumulate filters when it is
> > trying to satisfy a particular set of requirements/constraints to perform
> > an operation. Once the operation is done and the next operation comes, if
> > the new operation has the same (or strictly more) resource
> > requirements/constraints compared to the last one, then it is more
> > efficient to KEEP the existing filters instead of getting useless offers
> > and rebuild the filters again.
> >
> > On the other hand, if the requirements/constraints are different (i.e.
> some
> > of the previous requirements could be loosened), then it means the
> existing
> > filter no longer make sense. Then it might be a good idea to clear all
> the
> > existing filters to improve the chance of getting more offers.
> >
> > Note, although we introduce `CLEAR_FILTERS` as part of decoupling the
> > `REVIVE` call, its usage should be independent of suppression/revival.
> The
> > decision to clear the filters only depends on whether the existing
> filters
> > make sense for the current operation constraints/requirements.
> >
> > Examples:
> > If a framework first launches a task, then wants to launch a replacement
> > task (because the first task failed), then it should keep the filters
> built
> > up during the first launch. However, if the framework wants to launch a
> > second task with a completely different resource profile, then clearing
> > filters might help to get more (otherwise filtered) offers and hence
> speed
> > up the deployment.
> >
> > -Meng
> >
> > On Mon, Dec 3, 2018 at 12:40 PM Vinod Kone  wrote:
> >
> > > Hi Meng,
> > >
> > > What would be the recommendation for framework authors on when to use
> > > UNSUPPRESS vs CLEAR_FILTER?
> > >
> > > Also, should it CLEAR_FILTERS instead of CLEAR_FILTER?
> > >
> > > On Mon, Dec 3, 2018 at 2:26 PM Meng Zhu  wrote:
> > >
> > >> Hi:
> > >>
> > >> tl;dr: We are proposing to add two new V1 scheduler APIs: unsuppress
> and
> > >> clear_filter in order to decouple the dual-semantics of the current
> > revive
> > >> call.
> > >>
> > >> As pointed out in the Mesos framework scalability guide
> > >> <
> >
> http://mesos.apache.org/documentation/latest/app-framework-development-guide/#multi-scheduler-scalability
> > >,
> > >> utilizing the suppress
> > >> <
> >
> http://mesos.apache.org/documentation/latest/scheduler-http-api/#suppress>
> > >> call is the key to get your cluster to a large number of frameworks
> > >> <
> >
> https://schd.ws/hosted_files/mesoscon18/84/Scaling%20Mesos%20to%20Thousands%20of%20Frameworks.pdf
> > >.
> >

Re: New scheduler API proposal: unsuppress and clear_filter

2018-12-03 Thread Meng Zhu
Hi Vinod:

Yeah, `CLEAR_FILTERS` sounds good.

UNSUPPRESS should be used whenever a currently suppressed framework wants to
resume getting offers after a previous SUPPRESS call.

As for `CLEAR_FILTERS`, the short (but not very useful) suggestion is to
call it whenever the framework wants to clear all the existing filters.

To elaborate, a framework declines offers and accumulates filters while it is
trying to satisfy a particular set of requirements/constraints to perform an
operation. Once the operation is done and the next operation comes, if the
new operation has the same (or strictly stronger) resource
requirements/constraints compared to the last one, then it is more efficient
to KEEP the existing filters instead of getting useless offers and rebuilding
the filters again.
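For context, these filters are created via the `refuse_seconds` field of the
DECLINE call; a minimal V1 scheduler call body might look like this (the IDs
are placeholders):

```
{
  "framework_id": { "value": "<framework-id>" },
  "type": "DECLINE",
  "decline": {
    "offer_ids": [ { "value": "<offer-id>" } ],
    "filters": { "refuse_seconds": 300.0 }
  }
}
```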

On the other hand, if the requirements/constraints are different (i.e. some
of the previous requirements could be loosened), then the existing filters no
longer make sense. In that case it might be a good idea to clear all the
existing filters to improve the chance of getting more offers.

Note that although we are introducing `CLEAR_FILTERS` as part of decoupling
the `REVIVE` call, its usage should be independent of suppression/revival.
The decision to clear the filters depends only on whether the existing
filters make sense for the current operation's constraints/requirements.

Examples:
If a framework first launches a task and then wants to launch a replacement
task (because the first task failed), it should keep the filters built up
during the first launch. However, if the framework wants to launch a second
task with a completely different resource profile, then clearing filters
might help it get more (otherwise filtered) offers and hence speed up the
deployment.

-Meng

On Mon, Dec 3, 2018 at 12:40 PM Vinod Kone  wrote:

> Hi Meng,
>
> What would be the recommendation for framework authors on when to use
> UNSUPPRESS vs CLEAR_FILTER?
>
> Also, should it CLEAR_FILTERS instead of CLEAR_FILTER?
>
> On Mon, Dec 3, 2018 at 2:26 PM Meng Zhu  wrote:
>
>> Hi:
>>
>> tl;dr: We are proposing to add two new V1 scheduler APIs: unsuppress and
>> clear_filter in order to decouple the dual-semantics of the current revive
>> call.
>>
>> As pointed out in the Mesos framework scalability guide
>> <http://mesos.apache.org/documentation/latest/app-framework-development-guide/#multi-scheduler-scalability>,
>> utilizing the suppress
>> <http://mesos.apache.org/documentation/latest/scheduler-http-api/#suppress>
>> call is the key to get your cluster to a large number of frameworks
>> <https://schd.ws/hosted_files/mesoscon18/84/Scaling%20Mesos%20to%20Thousands%20of%20Frameworks.pdf>.
>> In short, when a framework is idling with no intention to launch any tasks,
>> it should suppress to inform the Mesos to stop sending any more offers. And
>> the framework should revive
>> <http://mesos.apache.org/documentation/latest/scheduler-http-api/#revive>
>> when new work arrives. This way, the allocator will skip the framework when
>> performing resource allocations. As a result, thorny issues such as offer
>> starvation and resource fragmentation would be greatly mitigated.
>>
>> That being said. The suppress/revive calls currently are a little bit
>> unwieldy due to MESOS-9028
>> <https://issues.apache.org/jira/browse/MESOS-9028>:
>>
>> The revive call has two semantics. It unsuppresses the framework AND
>> clears all the existing filters. The latter makes the revive call
>> non-idempotent. And sometimes users may want to keep the existing filters
>> when reviving, which is not possible atm.
>>
>> To decouple the semantics, as suggested in the ticket, we propose to add
>> two new V1 scheduler calls:
>>
>> (1) `UNSUPPRESS` call requests Mesos to resume sending offers;
>> (2) `CLEAR_FILTER` call will explicitly clear all the existing filters.
>>
>> To make life easier, both calls will return 200 OK (as opposed to 202
>> returned by most existing scheduler calls, including `SUPPRESS` and
>> `REVIVE`).
>>
>> We will keep the revive call and its semantics (i.e. unsuppress AND
>> clear filters) for backward compatibility.
>>
>> Note, the changes are proposed for V1 API only. Thus, once the changes
>> are landed, framework developers are encouraged to move to V1 API to take
>> advantage of the new calls (among many other benefits).
>>
>> Any feedback/comments are welcome.
>>
>> -Meng
>>
>


New scheduler API proposal: unsuppress and clear_filter

2018-12-03 Thread Meng Zhu
Hi:

tl;dr: We are proposing to add two new V1 scheduler APIs: unsuppress and
clear_filter in order to decouple the dual-semantics of the current revive
call.

As pointed out in the Mesos framework scalability guide, utilizing the
suppress call is the key to scaling your cluster to a large number of
frameworks. In short, when a framework is idling with no intention to launch
any tasks, it should suppress to inform Mesos to stop sending any more
offers. And the framework should revive when new work arrives. This way, the
allocator will skip the framework when performing resource allocations. As a
result, thorny issues such as offer starvation and resource fragmentation
would be greatly mitigated.
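
For reference, here is a minimal sketch of how the existing calls look with
the v1 C++ protobufs (the transport, e.g. the v1 scheduler library or an
HTTP POST to the master's /api/v1/scheduler endpoint, is omitted):

#include <mesos/v1/mesos.pb.h>
#include <mesos/v1/scheduler/scheduler.pb.h>

namespace scheduler = mesos::v1::scheduler;

// Stop receiving offers while the framework has no work to launch.
scheduler::Call suppress(const mesos::v1::FrameworkID& frameworkId)
{
  scheduler::Call call;
  call.mutable_framework_id()->CopyFrom(frameworkId);
  call.set_type(scheduler::Call::SUPPRESS);
  return call;
}

// Resume offers when new work arrives; today this also clears all filters.
scheduler::Call revive(const mesos::v1::FrameworkID& frameworkId)
{
  scheduler::Call call;
  call.mutable_framework_id()->CopyFrom(frameworkId);
  call.set_type(scheduler::Call::REVIVE);
  return call;
}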

That being said, the suppress/revive calls currently are a little bit
unwieldy due to MESOS-9028:

The revive call has two semantics: it unsuppresses the framework AND clears
all the existing filters. The latter makes the revive call non-idempotent.
And sometimes users may want to keep the existing filters when reviving,
which is not possible atm.

To decouple the semantics, as suggested in the ticket, we propose to add
two new V1 scheduler calls:

(1) `UNSUPPRESS` call requests Mesos to resume sending offers;
(2) `CLEAR_FILTER` call will explicitly clear all the existing filters.

To make life easier, both calls will return 200 OK (as opposed to 202
returned by most existing scheduler calls, including `SUPPRESS` and
`REVIVE`).

We will keep the revive call and its semantics (i.e. unsuppress AND clear
filters) for backward compatibility.

Note, the changes are proposed for V1 API only. Thus, once the changes are
landed, framework developers are encouraged to move to V1 API to take
advantage of the new calls (among many other benefits).

Any feedback/comments are welcome.

-Meng


Re: [VOTE] Release Apache Mesos 1.5.2 (rc2)

2018-11-22 Thread Meng Zhu
+1
make check on Ubuntu 18.04

On Wed, Oct 31, 2018 at 4:26 PM Gilbert Song  wrote:

> Hi all,
>
> Please vote on releasing the following candidate as Apache Mesos 1.5.2.
>
> 1.5.2 includes the following:
>
> 
> *Announce major bug fixes here*
>   * [MESOS-3790] - ZooKeeper connection should retry on `EAI_NONAME`.
>   * [MESOS-8128] - Make os::pipe file descriptors O_CLOEXEC.
>   * [MESOS-8418] - mesos-agent high cpu usage because of numerous
> /proc/mounts reads.
>   * [MESOS-8545] -
> AgentAPIStreamingTest.AttachInputToNestedContainerSession is flaky.
>   * [MESOS-8568] - Command checks should always call
> `WAIT_NESTED_CONTAINER` before `REMOVE_NESTED_CONTAINER`.
>   * [MESOS-8620] - Containers stuck in FETCHING possibly due to
> unresponsive server.
>   * [MESOS-8830] - Agent gc on old slave sandboxes could empty persistent
> volume data.
>   * [MESOS-8871] - Agent may fail to recover if the agent dies before
> image store cache checkpointed.
>   * [MESOS-8904] - Master crash when removing quota.
>   * [MESOS-8906] - `UriDiskProfileAdaptor` fails to update profile
> selectors.
>   * [MESOS-8907] - Docker image fetcher fails with HTTP/2.
>   * [MESOS-8917] - Agent leaking file descriptors into forked processes.
>   * [MESOS-8921] - Autotools don't work with newer OpenJDK versions.
>   * [MESOS-8935] - Quota limit "chopping" can lead to cpu-only and
> memory-only offers.
>   * [MESOS-8936] - Implement a Random Sorter for offer allocations.
>   * [MESOS-8942] - Master streaming API does not send (health) check
> updates for tasks.
>   * [MESOS-8945] - Master check failure due to CHECK_SOME(providerId).
>   * [MESOS-8947] - Improve the container preparing logging in
> IOSwitchboard and volume/secret isolator.
>   * [MESOS-8952] - process::await/collect n^2 performance issue.
>   * [MESOS-8963] - Executor crash trying to print container ID.
>   * [MESOS-8978] - Command executor calling setsid breaks the tty support.
>   * [MESOS-8980] - mesos-slave can deadlock with docker pull.
>   * [MESOS-8986] - `slave.available()` in the allocator is expensive and
> drags down allocation performance.
>   * [MESOS-8987] - Master asks agent to shutdown upon auth errors.
>   * [MESOS-9024] - Mesos master segfaults with stack overflow under load.
>   * [MESOS-9049] - Agent GC could unmount a dangling persistent volume
> multiple times.
>   * [MESOS-9116] - Launch nested container session fails due to incorrect
> detection of `mnt` namespace of command executor's task.
>   * [MESOS-9125] - Port mapper CNI plugin might fail with "Resource
> temporarily unavailable".
>   * [MESOS-9127] - Port mapper CNI plugin might deadlock iptables on the
> agent.
>   * [MESOS-9131] - Health checks launching nested containers while a
> container is being destroyed lead to unkillable tasks.
>   * [MESOS-9142] - CNI detach might fail due to missing network config
> file.
>   * [MESOS-9144] - Master authentication handling leads to request
> amplification.
>   * [MESOS-9145] - Master has a fragile burned-in 5s authentication
> timeout.
>   * [MESOS-9146] - Agent has a fragile burn-in 5s authentication timeout.
>   * [MESOS-9147] - Agent and scheduler driver authentication retry backoff
> time could overflow.
>   * [MESOS-9151] - Container stuck at ISOLATING due to FD leak.
>   * [MESOS-9170] - Zookeeper doesn't compile with newer gcc due to format
> error.
>   * [MESOS-9196] - Removing rootfs mounts may fail with EBUSY.
>   * [MESOS-9231] - `docker inspect` may return an unexpected result to
> Docker executor due to a race condition.
>   * [MESOS-9267] - Mesos agent crashes when CNI network is not configured
> but used.
>   * [MESOS-9279] - Docker Containerizer 'usage' call might be expensive if
> mount table is big.
>   * [MESOS-9283] - Docker containerizer actor can get backlogged with
> large number of containers.
>   * [MESOS-9305] - Create cgoup recursively to workaround systemd deleting
> cgroups_root.
>   * [MESOS-9308] - URI disk profile adaptor could deadlock.
>   * [MESOS-9334] - Container stuck at ISOLATING state due to libevent poll
> never returns.
>
> The CHANGELOG for the release is available at:
>
> https://gitbox.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=1.5.2-rc2
>
> 
>
> The candidate for Mesos 1.5.2 release is available at:
> https://dist.apache.org/repos/dist/dev/mesos/1.5.2-rc2/mesos-1.5.2.tar.gz
>
> The tag to be voted on is 1.5.2-rc2:
> https://gitbox.apache.org/repos/asf?p=mesos.git;a=commit;h=1.5.2-rc2
>
> The SHA512 checksum of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.5.2-rc2/mesos-1.5.2.tar.gz.sha512
>
> The signature of the tarball can be found at:
>
> https://dist.apache.org/repos/dist/dev/mesos/1.5.2-rc2/mesos-1.5.2.tar.gz.asc
>
> The PGP key used to sign the release is here:
> https://dist.

Proposing Minimum Capability to Safeguard Downgrade

2018-06-14 Thread Meng Zhu
Hi:

A common use case for downgrade is rolling back from problematic upgrades.
Mesos promises compatibility between any 1.x and 1.y versions of
masters/agents as long as new features are not used. However, currently
there is no easy way to tell whether any “new” features are being used. And
any incompatible downgrade would silently result in undefined behavior
instead of failing safely. This is not ideal.

We want to help operators to make informed downgrade decisions and to take
correct actions (e.g. deactivate the use of certain new features) if
necessary. To this end, we propose adding minimum component capability.
Please check out the doc below for more details. Feel free to comment in the
doc! Thanks!

JIRA: MESOS-8878
Design proposal
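
For illustration only (the actual mechanism is described in the design doc
above): the core idea is that a downgrade is safe only when the target
version supports every capability the cluster currently has in use. A toy
sketch of that check:

#include <set>
#include <string>

// Returns true if every capability currently in use is also supported by
// the component version we want to downgrade to.
bool downgradeIsSafe(
    const std::set<std::string>& capabilitiesInUse,
    const std::set<std::string>& supportedByTarget)
{
  for (const std::string& capability : capabilitiesInUse) {
    if (supportedByTarget.count(capability) == 0) {
      return false;  // a feature in use is unknown to the downgrade target
    }
  }

  return true;
}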


-Meng


Proposing change to the allocatable check in the allocator

2018-06-11 Thread Meng Zhu
Hi:

The allocatable check in the allocator (shown below) was originally
introduced to help alleviate the situation where a framework receives some
resources, but no cpu/memory, and thus cannot launch a task.


constexpr double MIN_CPUS = 0.01;
constexpr Bytes MIN_MEM = Megabytes(32);

bool HierarchicalAllocatorProcess::allocatable(
    const Resources& resources)
{
  Option<double> cpus = resources.cpus();
  Option<Bytes> mem = resources.mem();

  return (cpus.isSome() && cpus.get() >= MIN_CPUS) ||
         (mem.isSome() && mem.get() >= MIN_MEM);
}


Issues

However, there have been a couple of issues surfacing lately surrounding the
check.

- MESOS-8935: Quota limit "chopping" can lead to cpu-only and memory-only
  offers.

We introduced fine-grained quota allocation (MESOS-7099) in Mesos 1.5. When
we allocate resources to a role, we'll "chop" the available resources of the
agent up to the quota limit for the role. However, this has the unintended
consequence of creating cpu-only and memory-only offers, even though there
might be other agents with both cpu and memory resources available in the
cluster.


- MESOS-8626: The 'allocatable' check in the allocator is problematic with
  multi-role frameworks.

Consider roleA reserved cpu/memory on an agent and roleB reserved disk on
the same agent. A framework under both roleA and roleB will not be able to
get the reserved disk due to the allocatable check. With the introduction of
resource providers, this situation will become more common.

Proposed change

Instead of hardcoding a one-size-fits-all value in Mesos, we are proposing
to add a new master flag, min_allocatable_resources. It specifies one or
more scalar resource quantities that define the minimum allocatable
resources for the allocator. The allocator will only offer resources that
contain at least one of the specified quantities. The default behavior *is
backward compatible*, i.e. by default, the flag is set to
“cpus:0.01|mem:32”.

Usage

The flag takes either a simple text list of resource(s) delimited by a bar
(|) or a JSON array of JSON-formatted resources. Note, the input should be
“pure” scalar quantities, i.e. the specified resource(s) should only have
the name, type (set to scalar) and scalar fields set.


Examples:

- To eliminate cpu-only or memory-only offers due to the quota chopping, we
  could set the flag to “cpus:0.01;mem:32”.

- To enable disk-only offers, we could set the flag to “disk:32”.

- For both, we could set the flag to “cpus:0.01;mem:32|disk:32”. Then the
  allocator will only offer resources that at least contain
  “cpus:0.01;mem:32” OR resources that at least contain “disk:32”.
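
To make the proposed semantics concrete, here is a rough sketch of the check
(not the actual patch; parsing of the flag into `minAllocatable` is elided,
and `Resources::contains()` is the existing Mesos API used for the
containment test):

#include <vector>

#include <mesos/resources.hpp>

using mesos::Resources;

// Resources are allocatable if they contain at least one of the quantities
// specified via --min_allocatable_resources.
bool allocatable(
    const Resources& resources,
    const std::vector<Resources>& minAllocatable)
{
  for (const Resources& quantities : minAllocatable) {
    if (resources.contains(quantities)) {
      return true;
    }
  }

  return false;
}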


Let me know what you think! Thanks!


-Meng


Re: Welcome Chun-Hung Hsiao as Mesos Committer and PMC Member

2018-03-12 Thread Meng Zhu
Congrats Chun! Well deserved!

On Mon, Mar 12, 2018 at 10:09 AM, Zhitao Li  wrote:

> Congrats, Chun!
>
> On Sun, Mar 11, 2018 at 11:47 PM, Gilbert Song 
> wrote:
>
> > Congrats, Chun!
> >
> > It is great to have you in the community!
> >
> > - Gilbert
> >
> > On Sun, Mar 11, 2018 at 4:40 PM, Andrew Schwartzmeyer <
> > and...@schwartzmeyer.com> wrote:
> >
> > > Congratulations Chun!
> > >
> > > I apologize for not also giving you a +1, as I certainly would have,
> but
> > > just discovered my mailing list isn't working. Just a heads up, don't
> let
> > > that happen to you too!
> > >
> > > I look forward to continuing to work with you.
> > >
> > > Cheers,
> > >
> > > Andy
> > >
> > >
> > > On 03/10/2018 9:14 pm, Jie Yu wrote:
> > >
> > >> Hi,
> > >>
> > >> I am happy to announce that the PMC has voted Chun-Hung Hsiao as a new
> > >> committer and member of PMC for the Apache Mesos project. Please join
> me
> > >> to
> > >> congratulate him!
> > >>
> > >> Chun has been an active contributor for the past year. His main
> > >> contributions to the project include:
> > >> * Designed and implemented gRPC client support to libprocess
> > (MESOS-7749)
> > >> * Designed and implemented Storage Local Resource Provider
> (MESOS-7235,
> > >> MESOS-8374)
> > >> * Implemented part of the CSI support (MESOS-7235, MESOS-8374)
> > >>
> > >> Chun is friendly and humble, but also intelligent, insightful, and
> > >> opinionated. I am confident that he will be a great addition to our
> > >> committer pool. Thanks Chun for all your contributions to the project
> so
> > >> far!
> > >>
> > >> His committer checklist can be found here:
> > >> https://docs.google.com/document/d/1FjroAvjGa5NdP29zM7-2eg6t
> > >> LPAzQRMUmCorytdEI_U/edit?usp=sharing
> > >>
> > >> - Jie
> > >>
> > >
> > >
> >
>
>
>
> --
> Cheers,
>
> Zhitao Li
>


Re: Tasks may be explicitly dropped by agent in Mesos 1.5

2018-03-02 Thread Meng Zhu
CORRECTION:

This is a new behavior that only appears in the current 1.5.x branch. In
1.5.0, the Mesos agent still has the old behavior, namely, any reordered
tasks (to the same executor) are launched regardless.

On Fri, Mar 2, 2018 at 9:41 AM, Chun-Hung Hsiao 
wrote:

> Gilbert I think you're right. The code path doesn't exist in 1.5.0.
>
> On Mar 2, 2018 9:36 AM, "Chun-Hung Hsiao"  wrote:
>
> > This is a new behavior we have after solving MESOS-1720, and thus a new
> > problem only in 1.5.x. Prior to 1.5, reordered tasks (to the same
> executor)
> > will be launched because whoever comes first will launch the executor.
> > Since 1.5, one might be dropped.
> >
> > On Mar 1, 2018 4:36 PM, "Gilbert Song"  wrote:
> >
> >> Meng,
> >>
> >> Could you double check if this is really an issue in Mesos 1.5.0
> release?
> >>
> >> MESOS-1720 <https://issues.apache.org/jira/browse/MESOS-1720> was
> >> resolved
> >> after the 1.5 release (rc-2) and it seems like
> >> it is only at the master branch and 1.5.x branch (not 1.5.0).
> >>
> >> Did I miss anything?
> >>
> >> - Gilbert
> >>
> >> On Thu, Mar 1, 2018 at 4:22 PM, Benjamin Mahler 
> >> wrote:
> >>
> >> > Put another way, we currently don't guarantee in-order task delivery
> to
> >> > the executor. Due to the changes for MESOS-1720, one special case of
> >> task
> >> > re-ordering now leads to the re-ordered task being dropped (rather
> than
> >> > delivered out-of-order as before). Technically, this is strictly
> better.
> >> >
> >> > However, we'd like to start guaranteeing in-order task delivery.
> >> >
> >> > On Thu, Mar 1, 2018 at 2:56 PM, Meng Zhu  wrote:
> >> >
> >> >> Hi all:
> >> >>
> >> >> TLDR: In Mesos 1.5, tasks may be explicitly dropped by the agent
> >> >> if all three conditions are met:
> >> >> (1) Several `LAUNCH_TASK` or `LAUNCH_GROUP` calls
> >> >>  use the same executor.
> >> >> (2) The executor currently does not exist on the agent.
> >> >> (3) Due to some race conditions, these tasks are trying to launch
> >> >> on the agent in a different order from their original launch order.
> >> >>
> >> >> In this case, tasks that are trying to launch on the agent
> >> >> before the first task in the original order will be explicitly
> dropped
> >> by
> >> >> the agent (`TASK_DROPPED` or `TASK_LOST` will be sent).
> >> >>
> >> >> This bug will be fixed in 1.5.1. It is tracked in
> >> >> https://issues.apache.org/jira/browse/MESOS-8624
> >> >>
> >> >> 
> >> >>
> >> >> In https://issues.apache.org/jira/browse/MESOS-1720, we introduced
> an
> >> >> ordering dependency between two `LAUNCH`/`LAUNCH_GROUP`
> >> >> calls to a new executor. The master would specify that the first call
> >> is
> >> >> the
> >> >> one to launch a new executor through the `launch_executor` field in
> >> >> `RunTaskMessage`/`RunTaskGroupMessage`, and the second one should
> >> >> use the existing executor launched by the first one.
> >> >>
> >> >> On the agent side, running a task/task group goes through a series of
> >> >> continuations, one is `collect()` on the future that unschedule
> >> >> frameworks from
> >> >> being GC'ed:
> >> >> https://github.com/apache/mesos/blob/master/src/slave/slave.
> cpp#L2158
> >> >> another is `collect()` on task authorization:
> >> >> https://github.com/apache/mesos/blob/master/src/slave/slave.
> cpp#L2333
> >> >> Since these `collect()` calls run on individual actors, the futures
> of
> >> the
> >> >> `collect()` calls for two `LAUNCH`/`LAUNCH_GROUP` calls may return
> >> >> out-of-order, even if the futures these two `collect()` wait for are
> >> >> satisfied in
> >> >> order (which is true in these two cases).
> >> >>
> >> >> As a result, under some race conditions (probably under some heavy
> load
> >> >> conditions), tasks rely on the previous task to launch executor may
> >> >> get processed before the task that is supposed to launch the executor
> >> >> first, resulting in the tasks being explicitly dropped by the agent.
> >> >>
> >> >> -Meng
> >> >>
> >> >>
> >> >>
> >> >
> >>
> >
>


Tasks may be explicitly dropped by agent in Mesos 1.5

2018-03-01 Thread Meng Zhu
Hi all:

TLDR: In Mesos 1.5, tasks may be explicitly dropped by the agent
if all three conditions are met:
(1) Several `LAUNCH_TASK` or `LAUNCH_GROUP` calls
 use the same executor.
(2) The executor currently does not exist on the agent.
(3) Due to some race conditions, these tasks are trying to launch
on the agent in a different order from their original launch order.

In this case, tasks that are trying to launch on the agent
before the first task in the original order will be explicitly dropped by
the agent (`TASK_DROPPED` or `TASK_LOST` will be sent).

This bug will be fixed in 1.5.1. It is tracked in
https://issues.apache.org/jira/browse/MESOS-8624



In https://issues.apache.org/jira/browse/MESOS-1720, we introduced an
ordering dependency between two `LAUNCH`/`LAUNCH_GROUP`
calls to a new executor. The master would specify that the first call is the
one to launch a new executor through the `launch_executor` field in
`RunTaskMessage`/`RunTaskGroupMessage`, and the second one should
use the existing executor launched by the first one.

On the agent side, running a task/task group goes through a series of
continuations: one is `collect()` on the future that unschedules frameworks
from being GC'ed:
https://github.com/apache/mesos/blob/master/src/slave/slave.cpp#L2158
another is `collect()` on task authorization:
https://github.com/apache/mesos/blob/master/src/slave/slave.cpp#L2333
Since these `collect()` calls run on individual actors, the futures of the
`collect()` calls for two `LAUNCH`/`LAUNCH_GROUP` calls may return
out-of-order, even if the futures these two `collect()` calls wait for are
satisfied in order (which is true in these two cases).

As a result, under some race conditions (probably under heavy load), tasks
that rely on the previous task to launch the executor may get processed
before the task that is supposed to launch the executor first, resulting in
those tasks being explicitly dropped by the agent.
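
The hazard can be illustrated with a small libprocess sketch (illustrative
only; the real code paths are the slave.cpp continuations linked above):

#include <vector>

#include <process/collect.hpp>
#include <process/future.hpp>

using process::Future;
using process::Promise;

void example()
{
  Promise<int> first;   // stands in for the task that launches the executor
  Promise<int> second;  // stands in for the task that reuses the executor

  // Each collect() runs on its own actor.
  Future<std::vector<int>> ready1 =
    process::collect(std::vector<Future<int>>{first.future()});
  Future<std::vector<int>> ready2 =
    process::collect(std::vector<Future<int>>{second.future()});

  ready1.onReady([](const std::vector<int>&) { /* run the first task */ });
  ready2.onReady([](const std::vector<int>&) { /* run the second task */ });

  // Even though the underlying futures are satisfied in order...
  first.set(1);
  second.set(2);

  // ...ready1 and ready2 may transition to READY in either order, because
  // the two collect() actors are scheduled independently. If ready2's
  // continuation runs first, the second task arrives before the executor
  // exists.
}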

-Meng


Re: Resource allocation cycle in DRF for multiple frameworks

2017-12-12 Thread Meng Zhu
Hi Bigggyan:

Q1 and Q2: Keep in mind that a framework can assert some control over its
share by rejecting resource offers.

Quote from the Mesos NSDI paper:

"To maintain a thin interface and enable frameworks to evolve
independently, Mesos does not require frameworks to specify their resource
requirements or constraints. Instead, Mesos gives frameworks the ability to
reject offers. A framework can reject resources that do not satisfy its
constraints in order to wait for ones that do. Thus, the rejection
mechanism enables frameworks to support arbitrarily complex resource
constraints while keeping Mesos simple and scalable."
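
To make that concrete, here is a small sketch of rejecting offers; the body
of a Scheduler::resourceOffers callback from the (v0) C++ API in
mesos/scheduler.hpp is shown as a free function, matchesConstraints() is a
hypothetical framework-specific predicate, and the refuse time is arbitrary:

#include <vector>

#include <mesos/scheduler.hpp>

bool matchesConstraints(const mesos::Offer& offer);  // hypothetical helper

void resourceOffers(
    mesos::SchedulerDriver* driver,
    const std::vector<mesos::Offer>& offers)
{
  for (const mesos::Offer& offer : offers) {
    if (!matchesConstraints(offer)) {
      // Decline, and filter these resources for a while so the allocator
      // tries other frameworks instead of immediately re-offering to us.
      mesos::Filters filters;
      filters.set_refuse_seconds(300);
      driver->declineOffer(offer.id(), filters);
      continue;
    }

    // ... accept the offer and launch tasks ...
  }
}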

Q3: An allocator needs to implement a set of callback functions; please take
a look at the base class in `include/mesos/allocator/allocator.hpp`.

Hope that helps.

-Meng

On Tue, Dec 12, 2017 at 10:28 AM, bigggyan  wrote:

> Hi Benjamin,
> Thanks for the detailed explanation. It took little time for me to
> understand the implementation. I would like to ask few questions regarding
> DRF  implementation which I could not figure out from the source code.
>
> Q1: DRF paper talks about the demand vector of each user and allocation
> modules goal is to satisfy the user's demand in a fair share way. I could
> not find how current allocation module is taking care of the demand of each
> users. It looks like some time frameworks are getting offers more than they
> required as allocation module can not see how much the user actually
> required.
> I tried setting demand vector in the framework at the end of the each
> "resourceOffer" call back but could not see any difference. Is it meant to
> be like this? Is allocation module is ignoring the demands knowingly?
> please guide.
>
> Q2: during allocation  "randomly picked agent is assigned to DRF_sorted
> framework"  may assign agents to a framework where that framework does not
> need the available offered resources. Say it received an offer with zero
> CPU but huge amount disk, though it required primarily CPU. Even though it
> does not use the disk resource but its share can go up for the disk
> assigned to it. So the framework may miss the next offer due to the high
> share of the disk.
>
> Why not pick a DRF_sorted framework first and check which agent can best
> fulfill its demands. Do you think in a huge production env checking each
> agent to best fit can cause a significant delay?
>
> Q3: documentation says we can add custom allocation module for custom
> needs. I was curious to know how easy it will be to tweak the code and see
> if it can make any improvement to a small cluster with a custom framework.
>
> Appreciate your help and thanks a lot.
>
> Thanks
>
> On Tue, Dec 5, 2017 at 9:28 PM, Benjamin Mahler 
> wrote:
>
>> Q1: we randomly sort the agents, so the pseudo-code I showed is:
>>
>> - for each agent:
>> + for each agent in random_sort(agents):
>>
>> Q2: It depends on which version you're running. We used to immediately
>> re-offer, but this was problematic since it kept going back to the same
>> framework when using a low timeout. Now, the current implementation won't
>> immediately re-offer it in an attempt to let it go to another framework
>> during the next allocation "cycle":
>>
>> https://github.com/apache/mesos/blob/1.4.0/src/master/alloca
>> tor/mesos/hierarchical.cpp#L1202-L1213
>>
>> Q3: We implement best-effort DRF to improve utilization. That is, we let
>> a role go above its fair share if the lower share roles do not want the
>> resources, and a role may have to wait for the resources to be released
>> before it can get its fair share (since we cannot revoke resources). So, we
>> increase utilization at the cost of no longer providing a guarantee that a
>> role can get its fair share without waiting! In the future, we will use
>> revocation to ensure a user is guaranteed to get their fair share without
>> having to wait.
>>
>> On Tue, Dec 5, 2017 at 9:04 AM, bigggyan  wrote:
>>
>>> Hi Benjamin,
>>> Thanks for the clear explanation. This loop structure makes it clear to
>>> understand how resource allocation is actually happening inside mesos
>>> master allocation module. However I have few quires. I will try to ask
>>> questions to clarify them. My goal is to understand how DRF is implemented
>>> in Apache Mesos based on the DRF paper. I am doing this for an academic
>>> project to develop a custom framework.
>>> I am using few in-house frameworks along with Mesosphere Marathon and
>>> Chronos. I am using default role and no weigh to any frameworks and
>>> constraint. so  the loop becomes simpler.
>>>
>>> I understand that there exists no such cycle, but what I meant was the
>>> end of the outer loop when all the agents are allocated to frameworks.
>>>
>>> Q1: the loop "for each agent" : how one agent is being picked over
>>> other agents, to be assigned to a framework?
>>> Q2: now after all the agents are allocated to available frameworks, each
>>> framework can decide whether to use it or not. So the question is: