Re: Build failed in Jenkins: Mesos » Mesos-Tidybot » -DENABLE_LIBEVENT=ON -DENABLE_SSL=ON,ubuntu #168

2024-03-14 Thread Benjamin Mahler
+devin

The new tidy image is working! Can you have a look at these errors? They
are related to the bpf / cgroups2 code.

On Thu, Mar 14, 2024 at 5:55 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=ON%20-DENABLE_SSL=ON,label_exp=ubuntu/168/display/redirect?page=changes
> >
>
> Changes:
>
> [Benjamin Mahler] [mesos-tidy] Fix broken Dockerfile due to llvm repo move.
>
>
> --
> [...truncated 571.48 KB...]
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.81.0/src/boost-1.81.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/3rdparty/rapidjson-1.1.0/src/rapidjson-1.1.0/include
> -I/usr/include/subversion-1
> -I/BUILD/3rdparty/grpc-1.11.1/src/grpc-1.11.1/include -isystem
> /usr/include/apr-1.0 -isystem
> /BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/include -isystem
> /BUILD/3rdparty/libarchive-3.3.2/src/libarchive-3.3.2-build/include
> -isystem /BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0/src -isystem
> /BUILD/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2 -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googlemock/include
> -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googletest/include
> -O3 -DNDEBUG -fPIE   -Wall -Wsign-compare -Wformat-security
> -fstack-protector-strong -Wno-unused-local-typedefs -std=c++11 -o
> CMakeFiles/libprocess-tests.dir/time_tests.cpp.o -c
> /tmp/SRC/3rdparty/libprocess/src/tests/time_tests.cpp
> [100%] Building CXX object
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/timeseries_tests.cpp.o
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.81.0/src/boost-1.81.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/3rdparty/rapidjson-1.1.0/src/rapidjson-1.1.0/include
> -I/usr/include/subversion-1
> -I/BUILD/3rdparty/grpc-1.11.1/src/grpc-1.11.1/include -isystem
> /usr/include/apr-1.0 -isystem
> /BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/include -isystem
> /BUILD/3rdparty/libarchive-3.3.2/src/libarchive-3.3.2-build/include
> -isystem /BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0/src -isystem
> /BUILD/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2 -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googlemock/include
> -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googletest/include
> -O3 -DNDEBUG -fPIE   -Wall -Wsign-compare -Wformat-security
> -fstack-protector-strong -Wno-unused-local-typedefs -std=c++11 -o
> CMakeFiles/libprocess-tests.dir/timeseries_tests.cpp.o -c
> /tmp/SRC/3rdparty/libprocess/src/tests/timeseries_tests.cpp
> [100%] Building CXX object
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/reap_tests.cpp.o
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.81.0/src/boost-1.81.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/3rdparty/rapidjson-1.1.0/src/rapidjson-1.1.0/include
> -I/usr/include/subversion

Re: Build failed in Jenkins: Mesos » Mesos-Reviewbot #30672

2024-03-13 Thread Benjamin Mahler
+dleamy

I cleaned up dleamy's reviews that had bad dependencies, so I believe these
failures should stop now.

On Wed, Mar 13, 2024 at 6:14 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Reviewbot/30672/display/redirect
> >
>
> Changes:
>
>
> --
> [...truncated 33.15 KB...]
> Dependent review: https://reviews.apache.org/api/review-requests/71303/
> Latest diff timestamp: 2019-08-28 04:58:52
> Latest dependency change timestamp: 2019-08-28 05:00:26
> Dependent review: https://reviews.apache.org/api/review-requests/71379/
> Latest diff timestamp: 2019-08-28 04:58:44
> Checking if review: 71376 needs verification
> Latest review timestamp: 2019-08-28 01:27:27
> Latest diff timestamp: 2019-08-27 20:33:43
> Checking if review: 70335 needs verification
> Skipping blocking review 70335
> Checking if review: 71417 needs verification
> Latest review timestamp: 2019-08-30 21:15:54
> Latest diff timestamp: 2019-08-30 20:57:35
> Dependent review: https://reviews.apache.org/api/review-requests/70335/
> Latest diff timestamp: 2019-08-30 20:57:23
> Checking if review: 71422 needs verification
> Skipping blocking review 71422
> Checking if review: 71391 needs verification
> Skipping blocking review 71391
> Checking if review: 71423 needs verification
> Latest review timestamp: 2019-09-02 13:06:51
> Latest diff timestamp: 2019-09-02 09:56:23
> Dependent review: https://reviews.apache.org/api/review-requests/71422/
> Latest diff timestamp: 2019-09-02 09:52:49
> Checking if review: 71384 needs verification
> Skipping blocking review 71384
> Checking if review: 71385 needs verification
> Latest review timestamp: 2019-09-04 21:01:06
> Latest diff timestamp: 2019-09-04 12:40:19
> Dependent review: https://reviews.apache.org/api/review-requests/71384/
> Latest diff timestamp: 2019-08-28 09:10:48
> Dependent review: https://reviews.apache.org/api/review-requests/71383/
> Latest diff timestamp: 2019-08-28 09:10:38
> Dependent review: https://reviews.apache.org/api/review-requests/71382/
> Latest diff timestamp: 2019-08-28 09:10:15
> Checking if review: 71390 needs verification
> Skipping blocking review 71390
> Checking if review: 71392 needs verification
> Skipping blocking review 71392
> Checking if review: 71393 needs verification
> Skipping blocking review 71393
> Checking if review: 71394 needs verification
> Skipping blocking review 71394
> Checking if review: 71395 needs verification
> Latest review timestamp: 2019-08-28 21:37:05
> Latest diff timestamp: 2019-08-28 16:40:42
> Dependent review: https://reviews.apache.org/api/review-requests/71394/
> Latest diff timestamp: 2019-08-28 16:40:07
> Dependent review: https://reviews.apache.org/api/review-requests/71393/
> Latest diff timestamp: 2019-08-28 16:39:26
> Dependent review: https://reviews.apache.org/api/review-requests/71392/
> Latest diff timestamp: 2019-08-28 16:39:04
> Dependent review: https://reviews.apache.org/api/review-requests/71391/
> Latest diff timestamp: 2019-08-28 16:38:32
> Dependent review: https://reviews.apache.org/api/review-requests/71390/
> Latest diff timestamp: 2019-08-28 16:31:58
> Checking if review: 71451 needs verification
> Skipping blocking review 71451
> Checking if review: 71450 needs verification
> Skipping blocking review 71450
> Checking if review: 71452 needs verification
> Latest review timestamp: 2019-09-09 11:18:47
> Latest diff timestamp: 2019-09-09 10:56:48
> Dependent review: https://reviews.apache.org/api/review-requests/71451/
> Latest diff timestamp: 2019-09-09 10:56:33
> Dependent review: https://reviews.apache.org/api/review-requests/71450/
> Latest diff timestamp: 2019-09-09 10:56:21
> Checking if review: 71459 needs verification
> Skipping blocking review 71459
> Checking if review: 69411 needs verification
> Latest review timestamp: 2019-09-09 22:37:16
> Latest diff timestamp: 2019-09-09 16:35:28
> Latest dependency change timestamp: 2018-12-18 18:43:36
> Checking if review: 71494 needs verification
> Latest review timestamp: 2019-09-17 09:45:41
> Latest diff timestamp: 2019-09-17 07:43:33
> Checking if review: 71179 needs verification
> Latest review timestamp: 2019-09-18 15:57:05
> Latest diff timestamp: 2019-07-29 16:17:18
> Dependent review: https://reviews.apache.org/api/review-requests/71178/
> Latest diff timestamp: 2019-09-18 12:55:36
> Checking if review: 71510 needs verification
> Skipping blocking review 71510
> Checking if review: 71511 needs verification
> Skipping blocking review 71511
> Checking if review: 71512 needs verification
> Latest review timestamp: 2019-09-19 22:39:11
> Latest diff timestamp: 2019-09-19 18:28:21
> Dependent review: https://reviews.apache.org/api/review-requests/71511/
> Latest diff timestamp: 2019-09-19 18:28:16
> Dependent review: https://reviews.apache.org/api/review-requests/71510/
> Latest diff timestamp: 2019-09-19 18:28:11
> Checking if review: 71441 needs 

Re: Build failed in Jenkins: Mesos » Mesos-Tidybot » -DENABLE_LIBEVENT=OFF -DENABLE_SSL=OFF,ubuntu #143

2024-03-13 Thread Benjamin Mahler
Fixed the image here: https://github.com/apache/mesos/pull/516
Pinged michael to get access to the dockerhub repository so that I can
upload a new one. I can then update our readme accordingly.
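
For anyone hitting the same redirect loop: git.llvm.org / llvm.org/git no
longer serve the old split LLVM repositories, so the clone URLs in the
Dockerfile have to move. The PR above has the authoritative change; the
following is only a hypothetical sketch of the kind of swap involved (the
archived llvm-mirror location and branch names are assumptions, not taken
from the PR):

```dockerfile
# Assumed fix shape only -- see apache/mesos PR 516 for the real change.
# git.llvm.org stopped serving the per-project repos; the archived
# read-only GitHub mirror (llvm-mirror) keeps the same split layout and
# release_90 branch that the Mesos clang forks were built against.
RUN git clone --depth 1 -b release_90 \
      https://github.com/llvm-mirror/llvm.git /tmp/llvm && \
    git clone --depth 1 -b mesos_90 \
      https://github.com/mesos/clang.git /tmp/llvm/tools/clang && \
    git clone --depth 1 -b mesos_90 \
      https://github.com/mesos/clang-tools-extra.git \
      /tmp/llvm/tools/clang/tools/extra
```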

On Tue, Feb 20, 2024 at 2:24 PM Benjamin Mahler  wrote:

> +cf-natali
>
> Looks like we need to upload a new docker image for tidy given the boost
> upgrade, similar to what you did here:
> https://github.com/apache/mesos/pull/451
> However, the docker image build fails for me, *and* I'm not able to login
> to docker hub at the moment due to an error (I've filed a support request
> with them).
>
> docker build -t mesos-tidy .
> ...
> 8.344 fatal: unable to access 'http://git.llvm.org/git/llvm/': Maximum
> (20) redirects followed
> ...
> ERROR: failed to solve: process "/bin/sh -c git clone --depth 1 -b
> release_90 http://llvm.org/git/llvm /tmp/llvm &&   git clone --depth 1 -b
> mesos_90 http://github.com/mesos/clang.git /tmp/llvm/tools/clang &&   git
> clone --depth 1 -b mesos_90 http://github.com/mesos/clang-tools-extra.git
> /tmp/llvm/tools/clang/tools/extra && cmake /tmp/llvm
> -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt &&   cmake --build
> tools/clang/lib/Headers --target install -- -j $(nproc) &&   cmake --build
> tools/clang/tools/extra/clang-tidy --target install -- -j $(nproc) &&
> cd / &&   rm -rf /tmp/llvm &&   rm -rf /tmp/build" did not complete
> successfully: exit code: 128
>
> Charles, are you able to build and upload a new tidy image?
>
> On Mon, Feb 19, 2024 at 11:25 PM Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> See <
>> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/143/display/redirect?page=changes
>> >
>>
>> Changes:
>>
>> [Benjamin Mahler] [cgroups2] Add cgroups2.hpp and cgroups2.cpp to Mesos
>> build.
>>
>>
>> --
>> Started by upstream project "Mesos/Mesos-Tidybot" build number 143
>> originally caused by:
>>  Started by an SCM change
>> Running as SYSTEM
>> [EnvInject] - Loading node environment variables.
>> Building remotely on builds59 (ubuntu) in workspace <
>> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/ws/
>> >
>> The recommended git tool is: git
>> No credentials specified
>> Wiping out workspace first.
>> Cloning the remote Git repository
>> Cloning repository https://github.com/apache/mesos.git
>>  > git init <
>> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/ws/>
>> # timeout=10
>> Fetching upstream changes from https://github.com/apache/mesos.git
>>  > git --version # timeout=10
>>  > git --version # 'git version 2.34.1'
>>  > git fetch --tags --force --progress --
>> https://github.com/apache/mesos.git +refs/heads/*:refs/remotes/origin/*
>> # timeout=10
>>  > git config remote.origin.url https://github.com/apache/mesos.git #
>> timeout=10
>>  > git config --add remote.origin.fetch
>> +refs/heads/*:refs/remotes/origin/* # timeout=10
>> Avoid second fetch
>> Checking out Revision bdf6cc199e72715243e9aeb2a2c2ce29793fb71d
>> (origin/master)
>>  > git config core.sparsecheckout # timeout=10
>>  > git checkout -f bdf6cc199e72715243e9aeb2a2c2ce29793fb71d # timeout=10
>> Commit message: "[cgroups2] Add cgroups2.hpp and cgroups2.cpp to Mesos
>> build."
>>  > git rev-list --no-walk ac352a7f295c80f401f89e5dc02caeb89dfbbb5d #
>> timeout=10
>> [ubuntu] $ /bin/sh -xe /tmp/jenkins867638908279689147.sh
>> + ./support/jenkins/tidybot.sh
>> Using default tag: latest
>> latest: Pulling from mesos/mesos-tidy
>> f7277927d38a: Pulling fs layer
>> 8d3eac894db4: Pulling fs layer
>> edf72af6d627: Pulling fs layer
>> 3e4f86211d23: Pulling fs layer
>> bb1f56e258b2: Pulling fs layer
>> 8a38d45c9398: Pulling fs layer
>> e606498c69d1: Pulling fs layer
>> ec0d66f1d695: Pulling fs layer
>> 34eb64f4e057: Pulling fs layer
>> 7fd173d75121: Pulling fs layer
>> 366ff4cb255c: Pulling fs layer
>> 92300132d70e: Pulling fs layer
>> eed7f1cb60ec: Pulling fs layer
>> e47acaaca3bf: Pulling fs layer
>> 6e2bb27922fd: Pulling fs layer
>> 80f6677dda2b: Pulling fs layer
>> 34eb64f4e057: Waiting
>> bb1f56e258b2: Waiting
>> 7fd173d75121: Waiting
>> 8a38d45c9398: Wa

Re: Build failed in Jenkins: Mesos » Mesos-Tidybot » -DENABLE_LIBEVENT=OFF -DENABLE_SSL=OFF,ubuntu #143

2024-02-20 Thread Benjamin Mahler
+cf-natali

Looks like we need to upload a new docker image for tidy given the boost
upgrade, similar to what you did here:
https://github.com/apache/mesos/pull/451
However, the docker image build fails for me, *and* I'm not able to login
to docker hub at the moment due to an error (I've filed a support request
with them).

docker build -t mesos-tidy .
...
8.344 fatal: unable to access 'http://git.llvm.org/git/llvm/': Maximum (20)
redirects followed
...
ERROR: failed to solve: process "/bin/sh -c git clone --depth 1 -b
release_90 http://llvm.org/git/llvm /tmp/llvm &&   git clone --depth 1 -b
mesos_90 http://github.com/mesos/clang.git /tmp/llvm/tools/clang &&   git
clone --depth 1 -b mesos_90 http://github.com/mesos/clang-tools-extra.git
/tmp/llvm/tools/clang/tools/extra && cmake /tmp/llvm
-DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt &&   cmake --build
tools/clang/lib/Headers --target install -- -j $(nproc) &&   cmake --build
tools/clang/tools/extra/clang-tidy --target install -- -j $(nproc) &&
cd / &&   rm -rf /tmp/llvm &&   rm -rf /tmp/build" did not complete
successfully: exit code: 128

Charles, are you able to build and upload a new tidy image?

On Mon, Feb 19, 2024 at 11:25 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/143/display/redirect?page=changes
> >
>
> Changes:
>
> [Benjamin Mahler] [cgroups2] Add cgroups2.hpp and cgroups2.cpp to Mesos
> build.
>
>
> --
> Started by upstream project "Mesos/Mesos-Tidybot" build number 143
> originally caused by:
>  Started by an SCM change
> Running as SYSTEM
> [EnvInject] - Loading node environment variables.
> Building remotely on builds59 (ubuntu) in workspace <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/ws/
> >
> The recommended git tool is: git
> No credentials specified
> Wiping out workspace first.
> Cloning the remote Git repository
> Cloning repository https://github.com/apache/mesos.git
>  > git init <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=OFF%20-DENABLE_SSL=OFF,label_exp=ubuntu/ws/>
> # timeout=10
> Fetching upstream changes from https://github.com/apache/mesos.git
>  > git --version # timeout=10
>  > git --version # 'git version 2.34.1'
>  > git fetch --tags --force --progress --
> https://github.com/apache/mesos.git +refs/heads/*:refs/remotes/origin/* #
> timeout=10
>  > git config remote.origin.url https://github.com/apache/mesos.git #
> timeout=10
>  > git config --add remote.origin.fetch
> +refs/heads/*:refs/remotes/origin/* # timeout=10
> Avoid second fetch
> Checking out Revision bdf6cc199e72715243e9aeb2a2c2ce29793fb71d
> (origin/master)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f bdf6cc199e72715243e9aeb2a2c2ce29793fb71d # timeout=10
> Commit message: "[cgroups2] Add cgroups2.hpp and cgroups2.cpp to Mesos
> build."
>  > git rev-list --no-walk ac352a7f295c80f401f89e5dc02caeb89dfbbb5d #
> timeout=10
> [ubuntu] $ /bin/sh -xe /tmp/jenkins867638908279689147.sh
> + ./support/jenkins/tidybot.sh
> Using default tag: latest
> latest: Pulling from mesos/mesos-tidy
> f7277927d38a: Pulling fs layer
> 8d3eac894db4: Pulling fs layer
> edf72af6d627: Pulling fs layer
> 3e4f86211d23: Pulling fs layer
> bb1f56e258b2: Pulling fs layer
> 8a38d45c9398: Pulling fs layer
> e606498c69d1: Pulling fs layer
> ec0d66f1d695: Pulling fs layer
> 34eb64f4e057: Pulling fs layer
> 7fd173d75121: Pulling fs layer
> 366ff4cb255c: Pulling fs layer
> 92300132d70e: Pulling fs layer
> eed7f1cb60ec: Pulling fs layer
> e47acaaca3bf: Pulling fs layer
> 6e2bb27922fd: Pulling fs layer
> 80f6677dda2b: Pulling fs layer
> 34eb64f4e057: Waiting
> bb1f56e258b2: Waiting
> 7fd173d75121: Waiting
> 8a38d45c9398: Waiting
> 366ff4cb255c: Waiting
> 92300132d70e: Waiting
> e606498c69d1: Waiting
> eed7f1cb60ec: Waiting
> ec0d66f1d695: Waiting
> e47acaaca3bf: Waiting
> 80f6677dda2b: Waiting
> 6e2bb27922fd: Waiting
> 3e4f86211d23: Waiting
> 8d3eac894db4: Verifying Checksum
> 8d3eac894db4: Download complete
> edf72af6d627: Download complete
> 3e4f86211d23: Verifying Checksum
> 3e4f86211d23: Download complete
> bb1f56e258b2: Download complete
> f7277927d38a: Verifying Checksum
> f7277927d38a: Download complete
> ec0d66f1d695: Verifying Checksum
> ec0d66f1d695: Download complete
> 34eb64f4e057: Download complete
> 7fd173d75121: Verify

Re: Build failed in Jenkins: Mesos » Mesos-Tidybot » -DENABLE_LIBEVENT=ON -DENABLE_SSL=ON,ubuntu #119

2024-01-16 Thread Benjamin Mahler
+devin

This will be an easy fix: s/.get()/->/


On Tue, Jan 16, 2024 at 5:43 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://ci-builds.apache.org/job/Mesos/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=ON%20-DENABLE_SSL=ON,label_exp=ubuntu/119/display/redirect?page=changes
> >
>
> Changes:
>
> [Benjamin Mahler] Reduced the number of jemalloc arenas.
>
> [Benjamin Mahler] Added ECN configuration copying from the host to the
> container.
>
> [Benjamin Mahler] Expose new metrics for memory usage in cgroup.
>
>
> --
> [...truncated 536.96 KB...]
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.65.0/src/boost-1.65.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/3rdparty/rapidjson-1.1.0/src/rapidjson-1.1.0/include
> -I/usr/include/subversion-1
> -I/BUILD/3rdparty/grpc-1.11.1/src/grpc-1.11.1/include -isystem
> /usr/include/apr-1.0 -isystem
> /BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/include -isystem
> /BUILD/3rdparty/libarchive-3.3.2/src/libarchive-3.3.2-build/include
> -isystem /BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0/src -isystem
> /BUILD/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2 -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googlemock/include
> -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googletest/include
> -O3 -DNDEBUG -fPIE   -Wall -Wsign-compare -Wformat-security
> -fstack-protector-strong -Wno-unused-local-typedefs -std=c++11 -o
> CMakeFiles/libprocess-tests.dir/decoder_tests.cpp.o -c
> /tmp/SRC/3rdparty/libprocess/src/tests/decoder_tests.cpp
> [ 83%] Building CXX object
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/encoder_tests.cpp.o
> [ 83%] Building CXX object
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/future_tests.cpp.o
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.65.0/src/boost-1.65.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/3rdparty/rapidjson-1.1.0/src/rapidjson-1.1.0/include
> -I/usr/include/subversion-1
> -I/BUILD/3rdparty/grpc-1.11.1/src/grpc-1.11.1/include -isystem
> /usr/include/apr-1.0 -isystem
> /BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/include -isystem
> /BUILD/3rdparty/libarchive-3.3.2/src/libarchive-3.3.2-build/include
> -isystem /BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0/src -isystem
> /BUILD/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2 -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googlemock/include
> -isystem
> /BUILD/3rdparty/googletest-1.8.0/src/googletest-1.8.0/googletest/include
> -O3 -DNDEBUG -fPIE   -Wall -Wsign-compare -Wformat-security
> -fstack-protector-strong -Wno-unused-local-typedefs -std=c++11 -o
> CMakeFiles/libprocess-tests.dir/encoder_tests.cpp.o -c
> /tmp/SRC/3rdparty/libprocess/src/tests/encoder_tests.cpp
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/bin/c++
> -DBUILD_DIR=\"/BUILD/3rdparty/libprocess/src/tests\" -DHAVE_LIBZ
> -DLIBDIR=\"/usr/local/lib\" -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_LIBEVENT=1
> -DUSE_SSL_SOCKET=1 -DVERSION=\"1.12.0\" -D__STDC_FORMAT_MACROS
> -I/BUILD/3rdparty/libprocess/src/tests
> -I/tmp/SRC/3rdparty/libprocess/src/tests/..
> -I/tmp/SRC/3rdparty/libprocess/src/../include
> -I/tmp/SRC/3rdparty/stout/include
> -I/BUILD/3rdparty/boost-1.65.0/src/boost-1.65.0
> -I/BUILD/3rdparty/elfio-3.2/src/elfio-3.2
> -I/BUILD/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/BUILD/

Re: Branch: origin/master Mesos-Buildbot - Build # 6937 - Failure

2020-03-06 Thread Benjamin Mahler
Known failure:

https://issues.apache.org/jira/browse/MESOS-8983

On Fri, Mar 6, 2020 at 6:25 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #6937)
>
> Status: Failure
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/6937/
> to view the results.
>
> All tests passed


Re: Branch: origin/master Mesos-Buildbot - Build # 6674 - Still Failing

2019-08-13 Thread Benjamin Mahler
https://issues.apache.org/jira/browse/MESOS-9939

On Tue, Aug 13, 2019 at 5:07 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #6674)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/6674/
> to view the results.
>
> 1 tests failed.
> FAILED:  PersistentVolumeEndpointsTest.DynamicReservation
>
> Error Message:
> ../../../src/tests/persistent_volume_endpoints_tests.cpp:305
> Value of: Resources(offer.resources()).contains(
> allocatedResources(volume, frameworkInfo.roles(0)))
>   Actual: false
> Expected: true
>
> Stack Trace:
> ../../../src/tests/persistent_volume_endpoints_tests.cpp:305
> Value of: Resources(offer.resources()).contains(
> allocatedResources(volume, frameworkInfo.roles(0)))
>   Actual: false
> Expected: true


Re: Branch: origin/master Mesos-Buildbot - Build # 6547 - Still Failing

2019-06-21 Thread Benjamin Mahler
+Chun-Hung Hsiao 

On Fri, Jun 21, 2019 at 6:43 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #6547)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/6547/
> to view the results.
>
> 4 tests failed.
> FAILED:  PersistentVolumeEndpointsTest.DynamicReservation
>
> Error Message:
> ../../../src/tests/persistent_volume_endpoints_tests.cpp:305
> Value of: Resources(offer.resources()).contains(
> allocatedResources(volume, frameworkInfo.roles(0)))
>   Actual: false
> Expected: true
>
> Stack Trace:
> ../../../src/tests/persistent_volume_endpoints_tests.cpp:305
> Value of: Resources(offer.resources()).contains(
> allocatedResources(volume, frameworkInfo.roles(0)))
>   Actual: false
> Expected: true
>
> FAILED:
> CSIVersion/StorageLocalResourceProviderTest.RetryOperationStatusUpdateAfterRecovery/v1
>
> Error Message:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x80f06b8 master@172.17.0.4:38829,
> @0x7f696c022350 264-byte object  00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 20-61 20-63 6F-6D
> 70-6F D0-C7 EE-07 00-00 00-00 C0-C7 EE-07 00-00 00-00 02-00 00-00 AC-11
> 00-04 ... 20-74 6F-20 61-6E 20-41 00-00 00-00 6E-20 76-61 40-B0 80-07 00-00
> 00-00 30-B0 80-07 00-00 00-00 40-DA 02-6C 69-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 10-01 00-00 00-00 00-00>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> Stack Trace:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x80f06b8 master@172.17.0.4:38829,
> @0x7f696c022350 264-byte object  00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 20-61 20-63 6F-6D
> 70-6F D0-C7 EE-07 00-00 00-00 C0-C7 EE-07 00-00 00-00 02-00 00-00 AC-11
> 00-04 ... 20-74 6F-20 61-6E 20-41 00-00 00-00 6E-20 76-61 40-B0 80-07 00-00
> 00-00 30-B0 80-07 00-00 00-00 40-DA 02-6C 69-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 10-01 00-00 00-00 00-00>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> FAILED:
> CSIVersion/StorageLocalResourceProviderTest.RetryOperationStatusUpdateAfterRecovery/v0
>
> Error Message:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x75f95b8 master@172.17.0.3:34315,
> @0x7f4f140b8a40 264-byte object <80-02 69-83 4F-7F 00-00 C0-98 09-14 4F-7F
> 00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 01-00 00-00 04-00
> 00-00 70-D3 CD-07 00-00 00-00 60-D3 CD-07 00-00 00-00 02-00 00-00 AC-11
> 00-03 ... 69-64 65-72 54-65 73-74 00-00 00-00 72-79 4F-70 10-F7 5F-07 00-00
> 00-00 00-F7 5F-07 00-00 00-00 60-86 0B-14 4F-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 38-75 2F-34 47-42 2D-30>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> Stack Trace:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x75f95b8 master@172.17.0.3:34315,
> @0x7f4f140b8a40 264-byte object <80-02 69-83 4F-7F 00-00 C0-98 09-14 4F-7F
> 00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 01-00 00-00 04-00
> 00-00 70-D3 CD-07 00-00 00-00 60-D3 CD-07 00-00 00-00 02-00 00-00 AC-11
> 00-03 ... 69-64 65-72 54-65 73-74 00-00 00-00 72-79 4F-70 10-F7 5F-07 00-00
> 00-00 00-F7 5F-07 00-00 00-00 60-86 0B-14 4F-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 38-75 2F-34 47-42 2D-30>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> FAILED:  SlaveTest.AgentFailoverHTTPExecutorUsingResourceProviderResources
>
> Error Message:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:704
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x685c8d0 slave(270)@172.17.0.2:34571,
> @0x7f192c0507c0 32-byte object <88-FD E0-8A 19-7F 00-00 A0-11 05-2C 19-7F
> 00-00 00-00 00-00 00-00 00-00 00-A1 29-05 00-00 00-00>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> Stack Trace:
> /tmp/SRC/3rdparty/libprocess/src/../include/process/gmock.hpp:704
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x685c8d0 slave(270)@172.17.0.2:34571,
> @0x7f192c0507c0 32-byte object <88-FD E0-8A 19-7F 00-00 A0-11 05-2C 

Re: Branch: origin/master Mesos-Buildbot - Build # 6546 - Failure

2019-06-21 Thread Benjamin Mahler
+Chun-Hung Hsiao 

On Fri, Jun 21, 2019 at 4:56 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #6546)
>
> Status: Failure
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/6546/
> to view the results.
>
> 2 tests failed.
> FAILED:
> CSIVersion/StorageLocalResourceProviderTest.RetryOperationStatusUpdateAfterRecovery/v0
>
> Error Message:
> ../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x55a74ff816f8 master@172.17.0.4:34047,
> @0x7f88f80612b0 216-byte object <00-AB 87-2F 89-7F 00-00 B8-35 06-B0 88-7F
> 00-00 90-56 F4-4F A7-55 00-00 80-56 F4-4F A7-55 00-00 02-00 00-00 AC-11
> 00-04 00-00 00-00 00-00 00-00 00-00 00-00 FF-84 00-00 01-00 00-00 50-72
> 6F-76 ... 01-00 00-00 62-38 36-62 B0-00 00-00 00-00 00-00 34-00 00-00 00-00
> 00-00 00-C2 06-F8 88-7F 00-00 00-00 00-00 88-7F 00-00 80-D1 12-50 A7-55
> 00-00 70-D1 12-50 A7-55 00-00 E8-FF 06-F8 88-7F 00-00>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> Stack Trace:
> ../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x55a74ff816f8 master@172.17.0.4:34047,
> @0x7f88f80612b0 216-byte object <00-AB 87-2F 89-7F 00-00 B8-35 06-B0 88-7F
> 00-00 90-56 F4-4F A7-55 00-00 80-56 F4-4F A7-55 00-00 02-00 00-00 AC-11
> 00-04 00-00 00-00 00-00 00-00 00-00 00-00 FF-84 00-00 01-00 00-00 50-72
> 6F-76 ... 01-00 00-00 62-38 36-62 B0-00 00-00 00-00 00-00 34-00 00-00 00-00
> 00-00 00-C2 06-F8 88-7F 00-00 00-00 00-00 88-7F 00-00 80-D1 12-50 A7-55
> 00-00 70-D1 12-50 A7-55 00-00 E8-FF 06-F8 88-7F 00-00>)
>   Returns: false
>  Expected: to be never called
>Actual: called once - over-saturated and active
>
> FAILED:
> CSIVersion/StorageLocalResourceProviderTest.RetryOperationStatusUpdateAfterRecovery/v1
>
> Error Message:
> ../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x55a74ff816f8 master@172.17.0.4:34047,
> @0x7f88b0116580 216-byte object <00-AB 87-2F 89-7F 00-00 B8-35 06-B0 88-7F
> 00-00 50-6A E8-4F A7-55 00-00 40-6A E8-4F A7-55 00-00 02-00 00-00 AC-11
> 00-04 00-00 00-00 00-00 00-00 00-00 00-00 FF-84 00-00 01-00 00-00 89-7F
> 00-00 ... 01-00 00-00 89-7F 00-00 B0-00 00-00 00-00 00-00 24-00 00-00 00-00
> 00-00 60-66 11-B0 88-7F 00-00 00-00 00-00 00-00 00-00 40-6E EB-4F A7-55
> 00-00 30-6E EB-4F A7-55 00-00 A8-24 11-B0 88-7F 00-00>)
>   Returns: false
>  Expected: to be never called
>    Actual: called once - over-saturated and active
>
> Stack Trace:
> ../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x55a74ff816f8 master@172.17.0.4:34047,
> @0x7f88b0116580 216-byte object <00-AB 87-2F 89-7F 00-00 B8-35 06-B0 88-7F
> 00-00 50-6A E8-4F A7-55 00-00 40-6A E8-4F A7-55 00-00 02-00 00-00 AC-11
> 00-04 00-00 00-00 00-00 00-00 00-00 00-00 FF-84 00-00 01-00 00-00 89-7F
> 00-00 ... 01-00 00-00 89-7F 00-00 B0-00 00-00 00-00 00-00 24-00 00-00 00-00
> 00-00 60-66 11-B0 88-7F 00-00 00-00 00-00 00-00 00-00 40-6E EB-4F A7-55
> 00-00 30-6E EB-4F A7-55 00-00 A8-24 11-B0 88-7F 00-00>)
>   Returns: false
>  Expected: to be never called
>    Actual: called once - over-saturated and active


Re: Branch: origin/master Mesos-Buildbot - Build # 6536 - Still Failing

2019-06-18 Thread Benjamin Mahler
+chun

On Mon, Jun 17, 2019 at 9:10 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #6536)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/6536/
> to view the results.
>
> 1 tests failed.
> FAILED:
> CSIVersion/StorageLocalResourceProviderTest.RetryOperationStatusUpdateAfterRecovery/v1
>
> Error Message:
> ../../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x56051a1dc6f8 master@172.17.0.3:41751,
> @0x7f5784034730 264-byte object <58-E7 08-09 58-7F 00-00 30-6A 03-84 57-7F
> 00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 30-00 00-00 00-00
> 00-00 20-97 02-1A 05-56 00-00 10-97 02-1A 05-56 00-00 02-00 00-00 AC-11
> 00-03 ... 80-A2 00-84 57-7F 00-00 00-00 00-00 57-7F 00-00 50-0C B1-19 05-56
> 00-00 40-0C B1-19 05-56 00-00 50-66 03-84 57-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 24-00 00-00 00-00 00-00>)
>   Returns: false
>  Expected: to be never called
>    Actual: called once - over-saturated and active
>
> Stack Trace:
> ../../../3rdparty/libprocess/include/process/gmock.hpp:667
> Mock function called more times than expected - returning default value.
> Function call: filter(@0x56051a1dc6f8 master@172.17.0.3:41751,
> @0x7f5784034730 264-byte object <58-E7 08-09 58-7F 00-00 30-6A 03-84 57-7F
> 00-00 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 30-00 00-00 00-00
> 00-00 20-97 02-1A 05-56 00-00 10-97 02-1A 05-56 00-00 02-00 00-00 AC-11
> 00-03 ... 80-A2 00-84 57-7F 00-00 00-00 00-00 57-7F 00-00 50-0C B1-19 05-56
> 00-00 40-0C B1-19 05-56 00-00 50-66 03-84 57-7F 00-00 CA-03 00-00 00-00
> 00-00 CA-03 00-00 00-00 00-00 24-00 00-00 00-00 00-00>)
>   Returns: false
>  Expected: to be never called
>    Actual: called once - over-saturated and active


Re: Build failed in Jenkins: Mesos-Tidybot » -DENABLE_LIBEVENT=ON -DENABLE_SSL=ON,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2) #2091

2019-06-06 Thread Benjamin Mahler
+Andrei Sekretenko 

Looks like there's a tidy check for capturing 'this' in a callback without a
'defer'. While it's technically safe in this case, it's brittle and can
easily become unsafe if the future gets satisfied by something other than
the master actor.

While ideally we would send the framework error message synchronously after
the framework update fails, the v0 scheduler driver will not know when the
update completes, and therefore cannot wait for it to complete before
sending other messages to the master. So either way (with or without
defer), it will send other messages while a framework update is in progress,
and those messages may fail if the framework update fails.

Can you add a defer to silence the tidy check?

On Thu, Jun 6, 2019 at 10:51 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos-Tidybot/CMAKE_ARGS=-DENABLE_LIBEVENT=ON%20-DENABLE_SSL=ON,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/2091/display/redirect?page=changes
> >
>
> Changes:
>
> [mzhu] Added `<<` operator for `ResourceLimits`.
>
> [mzhu] Added comments on allocating non-scalar resources in the allocator.
>
> [mzhu] Moved `class ResourceQuantities` to public header.
>
> [mzhu] Added `repeated QuotaConfig` to `QuotaStatus`.
>
> [mzhu] Added more comments regarding `message QuotaConfig`.
>
> --
> [...truncated 498.29 KB...]
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/local/bin/cmake -E
> cmake_link_script CMakeFiles/ssl-client.dir/link.txt --verbose=1
> /usr/bin/c++  -O3 -DNDEBUG   CMakeFiles/ssl-client.dir/ssl_client.cpp.o
> -o ssl-client
> -Wl,-rpath,/BUILD/3rdparty/libprocess/src:/BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/lib:/BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0-build:/BUILD/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build:/BUILD/3rdparty/libevent-2.1.5-beta/src/libevent-2.1.5-beta-build/lib
> ../libprocess.so /usr/lib/x86_64-linux-gnu/libapr-1.so
> ../../../glog-0.4.0/src/glog-0.4.0-install/lib/libglog.so
> ../../../libarchive-3.3.2/src/libarchive-3.3.2-build/lib/libarchive.a
> /usr/lib/x86_64-linux-gnu/libcurl.so
> ../../../protobuf-3.5.0/src/protobuf-3.5.0-build/libprotobuf.so -lpthread
> /usr/lib/x86_64-linux-gnu/libz.so -lrt -ldl
> /usr/lib/x86_64-linux-gnu/libsvn_delta-1.so
> /usr/lib/x86_64-linux-gnu/libsvn_diff-1.so
> /usr/lib/x86_64-linux-gnu/libsvn_subr-1.so
> ../../../http_parser-2.6.2/src/http_parser-2.6.2-build/libhttp_parser.a
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgrpc++.so
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgrpc.so
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgpr.so
> /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/x86_64-linux-gnu/libcrypto.so
> ../../../googletest-1.8.0/src/googletest-1.8.0-build/googlemock/libgmock.a
> ../../../googletest-1.8.0/src/googletest-1.8.0-build/googlemock/gtest/libgtest.a
> -Wl,-rpath-link,/BUILD/3rdparty/libevent-2.1.5-beta/src/libevent-2.1.5-beta-build/lib
>
> make[3]: Leaving directory '/BUILD'
> [ 80%] Built target ssl-client
> [ 80%] Linking CXX executable benchmarks
> cd /BUILD/3rdparty/libprocess/src/tests && /usr/local/bin/cmake -E
> cmake_link_script CMakeFiles/benchmarks.dir/link.txt --verbose=1
> /usr/bin/c++  -O3 -DNDEBUG   CMakeFiles/benchmarks.dir/benchmarks.cpp.o
> CMakeFiles/benchmarks.dir/benchmarks.pb.cc.o  -o benchmarks
> -Wl,-rpath,/BUILD/3rdparty/libprocess/src:/BUILD/3rdparty/glog-0.4.0/src/glog-0.4.0-install/lib:/BUILD/3rdparty/protobuf-3.5.0/src/protobuf-3.5.0-build:/BUILD/3rdparty/grpc-1.10.0/src/grpc-1.10.0-build:/BUILD/3rdparty/libevent-2.1.5-beta/src/libevent-2.1.5-beta-build/lib
> ../libprocess.so /usr/lib/x86_64-linux-gnu/libapr-1.so
> ../../../glog-0.4.0/src/glog-0.4.0-install/lib/libglog.so
> ../../../libarchive-3.3.2/src/libarchive-3.3.2-build/lib/libarchive.a
> /usr/lib/x86_64-linux-gnu/libcurl.so
> ../../../protobuf-3.5.0/src/protobuf-3.5.0-build/libprotobuf.so -lpthread
> /usr/lib/x86_64-linux-gnu/libz.so -lrt -ldl
> /usr/lib/x86_64-linux-gnu/libsvn_delta-1.so
> /usr/lib/x86_64-linux-gnu/libsvn_diff-1.so
> /usr/lib/x86_64-linux-gnu/libsvn_subr-1.so
> ../../../http_parser-2.6.2/src/http_parser-2.6.2-build/libhttp_parser.a
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgrpc++.so
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgrpc.so
> ../../../grpc-1.10.0/src/grpc-1.10.0-build/libgpr.so
> /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/x86_64-linux-gnu/libcrypto.so
> ../../../googletest-1.8.0/src/googletest-1.8.0-build/googlemock/libgmock.a
> ../../../googletest-1.8.0/src/googletest-1.8.0-build/googlemock/gtest/libgtest.a
> -Wl,-rpath-link,/BUILD/3rdparty/libevent-2.1.5-beta/src/libevent-2.1.5-beta-build/lib
>
> make[3]: Leaving directory '/BUILD'
> [ 80%] Built target benchmarks
> /usr/bin/make -f
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/build.make
> 3rdparty/libprocess/src/tests/CMakeFiles/libprocess-tests.dir/depend
> make[3]: Entering directory '/BUILD'
> [ 80%] Generating 

Re: Branch: origin/1.7.x Mesos-Buildbot - Build # 5595 - Still Failing

2018-08-15 Thread Benjamin Mahler
Looks like a docker error?

On Wed, Aug 15, 2018 at 6:22 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/1.7.x
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #5595)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/5595/
> to view the results.


Re: Branch: origin/master Mesos-Buildbot - Build # 5294 - Still Failing

2018-08-15 Thread Benjamin Mahler
Just seeing now that this ticket was filed; my email filter was putting all
builds@ emails into a folder even when they were sent to me directly.

On Thu, May 17, 2018 at 1:15 AM, Alex R  wrote:

> This looks new: https://issues.apache.org/jira/browse/MESOS-8930
> Ben, Gilbert, can you please have a look?
>
> On 17 May 2018 at 00:57, Apache Jenkins Server 
> wrote:
>
>> Branch: origin/master
>>
>> The Apache Jenkins build system has built Mesos-Buildbot (build #5294)
>>
>> Status: Still Failing
>>
>> Check console output at https://builds.apache.org/job/
>> Mesos-Buildbot/5294/ to view the results.
>
>
>


Re: Branch: origin/master Mesos-Buildbot - Build # 5511 - Still Failing

2018-07-26 Thread Benjamin Mahler
https://reviews.apache.org/r/68073/

On Thu, Jul 26, 2018 at 4:01 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #5511)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/5511/
> to view the results.


Re: Branch: origin/master Mesos-Buildbot - Build # 5509 - Still Failing

2018-07-26 Thread Benjamin Mahler
Pushed a fix.

On Thu, Jul 26, 2018 at 12:13 PM, Benjamin Mahler 
wrote:

> Another distcheck rapidjson issue:
>
> 17:39:21 ERROR: files left in build directory after distclean:
> 17:39:21 ./3rdparty/._rapidjson-1.1.0
> 17:39:21 make[1]: *** [distcleancheck] Error 1
> 17:39:21 make[1]: Leaving directory `/tmp/SRC/build/mesos-1.7.0/_build'
> 17:39:21 make: *** [distcheck] Error 1
>
> Will take a look
>
> On Thu, Jul 26, 2018 at 12:08 PM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> Branch: origin/master
>>
>> The Apache Jenkins build system has built Mesos-Buildbot (build #5509)
>>
>> Status: Still Failing
>>
>> Check console output at https://builds.apache.org/job/
>> Mesos-Buildbot/5509/ to view the results.
>
>
>


Re: Branch: origin/master Mesos-Buildbot - Build # 5509 - Still Failing

2018-07-26 Thread Benjamin Mahler
Another distcheck rapidjson issue:

17:39:21 ERROR: files left in build directory after distclean:
17:39:21 ./3rdparty/._rapidjson-1.1.0
17:39:21 make[1]: *** [distcleancheck] Error 1
17:39:21 make[1]: Leaving directory `/tmp/SRC/build/mesos-1.7.0/_build'
17:39:21 make: *** [distcheck] Error 1

Will take a look

On Thu, Jul 26, 2018 at 12:08 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #5509)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/5509/
> to view the results.


Re: Branch: origin/master Mesos-Buildbot - Build # 5505 - Still Failing

2018-07-25 Thread Benjamin Mahler
Rapidjson fix was pushed already but not picked up yet by this run.

On Wed, Jul 25, 2018 at 5:11 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #5505)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/5505/
> to view the results.


Fwd: Branch: origin/master Mesos-Buildbot - Build # 5504 - Still Failing

2018-07-25 Thread Benjamin Mahler
Pushing a fix shortly.

-- Forwarded message --
From: Apache Jenkins Server 
Date: Wed, Jul 25, 2018 at 3:06 PM
Subject: Branch: origin/master Mesos-Buildbot - Build # 5504 - Still Failing
To: builds@mesos.apache.org


Branch: origin/master

The Apache Jenkins build system has built Mesos-Buildbot (build #5504)

Status: Still Failing

Check console output at https://builds.apache.org/job/Mesos-Buildbot/5504/
to view the results.


Re: Branch: origin/master Mesos-Buildbot - Build # 4984 - Still Failing

2018-02-12 Thread Benjamin Mahler
Hm..

19:08:27 docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
19:08:27 See 'docker run --help'.
19:08:27 Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
19:08:27 "docker rmi" requires at least 1 argument(s).
19:08:27 See 'docker rmi --help'.
19:08:27
19:08:27 Usage:  docker rmi [OPTIONS] IMAGE [IMAGE...]
19:08:27
19:08:27 Remove one or more images
19:08:28 Build step 'Execute shell' marked build as failure
19:08:28 Finished: FAILURE


On Mon, Feb 12, 2018 at 12:00 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Branch: origin/master
>
> The Apache Jenkins build system has built Mesos-Buildbot (build #4984)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Mesos-Buildbot/4984/
> to view the results.


Re: Build failed in Jenkins: Mesos-Buildbot » cmake,clang,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4458

2017-11-15 Thread Benjamin Mahler
I believe this is the getenv-during-setenv crash that hasn't been fixed yet.

On Wed, Nov 15, 2017 at 12:24 AM Alex R  wrote:

> +Ben, FYI
>
> On 14 November 2017 at 23:24, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> See <
>> https://builds.apache.org/job/Mesos-Buildbot/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23)/4458/display/redirect?page=changes
>> >
>>
>> Changes:
>>
>> [alexr] Revert "Handled the resource conversion for new operations in
>> master."
>>
>> [yujie.jay] Handled the resource conversion for new operations in master.
>>
>> [Kapil Arya] Updated RC tagging+voting mechanism.
>>
>> --
>> [...truncated 5.06 MB...]
>> 3: I1114 22:23:54.153551 15913 state.cpp:64] Recovering state from
>> '/tmp/MasterMaintenanceTest_InverseOffersFilters_IriSpo/meta'
>> 3: I1114 22:23:54.153776 15908 status_update_manager.cpp:203] Recovering
>> status update manager
>> 3: I1114 22:23:54.154068 15904 slave.cpp:6432] Finished recovery
>> 3: I1114 22:23:54.154662 15909 status_update_manager.cpp:177] Pausing
>> sending status updates
>> 3: I1114 22:23:54.154690 15903 slave.cpp:1007] New master detected at
>> master@172.17.0.2:44272
>> 3: I1114 22:23:54.154803 15903 slave.cpp:1042] Detecting new master
>> 3: I1114 22:23:54.160048 15916 slave.cpp:1069] Authenticating with master
>> master@172.17.0.2:44272
>> 3: I1114 22:23:54.160112 15916 slave.cpp:1078] Using default CRAM-MD5
>> authenticatee
>> 3: I1114 22:23:54.160331 15915 authenticatee.cpp:121] Creating new client
>> SASL connection
>> 3: I1114 22:23:54.160604 15905 master.cpp:8285] Authenticating slave(96)@
>> 172.17.0.2:44272
>> 3: I1114 22:23:54.160701 15902 authenticator.cpp:414] Starting
>> authentication session for crammd5-authenticatee(202)@172.17.0.2:44272
>> 3: I1114 22:23:54.160903 15913 authenticator.cpp:98] Creating new server
>> SASL connection
>> 3: I1114 22:23:54.161093 15912 authenticatee.cpp:213] Received SASL
>> authentication mechanisms: CRAM-MD5
>> 3: I1114 22:23:54.161116 15912 authenticatee.cpp:239] Attempting to
>> authenticate with mechanism 'CRAM-MD5'
>> 3: I1114 22:23:54.161213 15911 authenticator.cpp:204] Received SASL
>> authentication start
>> 3: I1114 22:23:54.161264 15911 authenticator.cpp:326] Authentication
>> requires more steps
>> 3: I1114 22:23:54.161358 15908 authenticatee.cpp:259] Received SASL
>> authentication step
>> 3: I1114 22:23:54.161485 15906 authenticator.cpp:232] Received SASL
>> authentication step
>> 3: I1114 22:23:54.161520 15906 auxprop.cpp:109] Request to lookup
>> properties for user: 'test-principal' realm: '36cb246a5077' server FQDN:
>> '36cb246a5077' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
>> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
>> 3: I1114 22:23:54.161537 15906 auxprop.cpp:181] Looking up auxiliary
>> property '*userPassword'
>> 3: I1114 22:23:54.161581 15906 auxprop.cpp:181] Looking up auxiliary
>> property '*cmusaslsecretCRAM-MD5'
>> 3: I1114 22:23:54.161613 15906 auxprop.cpp:109] Request to lookup
>> properties for user: 'test-principal' realm: '36cb246a5077' server FQDN:
>> '36cb246a5077' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
>> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
>> 3: I1114 22:23:54.161674 15906 auxprop.cpp:131] Skipping auxiliary
>> property '*userPassword' since SASL_AUXPROP_AUTHZID == true
>> 3: I1114 22:23:54.161689 15906 auxprop.cpp:131] Skipping auxiliary
>> property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
>> 3: I1114 22:23:54.161707 15906 authenticator.cpp:318] Authentication
>> success
>> 3: I1114 22:23:54.161789 15910 authenticatee.cpp:299] Authentication
>> success
>> 3: I1114 22:23:54.161855 15904 master.cpp:8315] Successfully
>> authenticated principal 'test-principal' at slave(96)@172.17.0.2:44272
>> 3: I1114 22:23:54.161911 15907 authenticator.cpp:432] Authentication
>> session cleanup for crammd5-authenticatee(202)@172.17.0.2:44272
>> 3: I1114 22:23:54.162109 15914 slave.cpp:1161] Successfully authenticated
>> with master master@172.17.0.2:44272
>> 3: I1114 22:23:54.162345 15914 slave.cpp:1682] Will retry registration
>> in 8.042063ms if necessary
>> 3: I1114 22:23:54.162554 15915 master.cpp:6032] Received register agent
>> message from slave(96)@172.17.0.2:44272 (maintenance-host-2)
>> 3: I1114 22:23:54.162588 15915 master.cpp:3870] Authorizing agent with
>> principal 'test-principal'
>> 3: I1114 22:23:54.162969 15902 master.cpp:6092] Authorized registration
>> of agent at slave(96)@172.17.0.2:44272 (maintenance-host-2)
>> 3: I1114 22:23:54.163059 15902 master.cpp:6185] Registering agent at
>> slave(96)@172.17.0.2:44272 (maintenance-host-2) with id
>> e40100c5-1704-4041-8d24-fca0b3944974-S1
>> 3: I1114 22:23:54.163383 15913 registrar.cpp:495] Applied 1 operations in
>> 82281ns; 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,clang,--verbose --disable-libtool-wrappers --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubunt

2017-10-31 Thread Benjamin Mahler
Filed: https://issues.apache.org/jira/browse/MESOS-8152

On Mon, Oct 30, 2017 at 10:59 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos-Buildbot/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23)/4382/display/redirect?page=changes
> >
>
> Changes:
>
> [zhq527725] Ignored the tasks already being killed when killing the task
> group.
>
> [zhq527725] Added a test `DefaultExecutorTest.KillMultipleTasks`.
>
> --
> [...truncated 12.37 MB...]
> I1031 05:59:04.391350  6852 slave.cpp:585] Agent resources:
> [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{
> "name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"ports","ranges":{"range":[{"begin":31000,"end":
> 32000}]},"type":"RANGES"}]
> I1031 05:59:04.391537  6852 slave.cpp:593] Agent attributes: [  ]
> I1031 05:59:04.391551  6852 slave.cpp:602] Agent hostname:
> maintenance-host-2
> I1031 05:59:04.391647  6853 status_update_manager.cpp:177] Pausing sending
> status updates
> I1031 05:59:04.392817  6836 state.cpp:64] Recovering state from
> '/tmp/MasterMaintenanceTest_InverseOffersFilters_WMqZQ5/meta'
> I1031 05:59:04.393013  6834 status_update_manager.cpp:203] Recovering
> status update manager
> I1031 05:59:04.393272  6846 slave.cpp:6340] Finished recovery
> I1031 05:59:04.393781  6832 status_update_manager.cpp:177] Pausing sending
> status updates
> I1031 05:59:04.393793  6840 slave.cpp:999] New master detected at
> master@172.17.0.2:35460
> I1031 05:59:04.393842  6840 slave.cpp:1034] Detecting new master
> I1031 05:59:04.403740  6843 slave.cpp:1061] Authenticating with master
> master@172.17.0.2:35460
> I1031 05:59:04.403794 6843 slave.cpp:1070] Using default CRAM-MD5
> authenticatee
> I1031 05:59:04.403993  6855 authenticatee.cpp:121] Creating new client
> SASL connection
> I1031 05:59:04.404264  6847 master.cpp:8128] Authenticating slave(181)@
> 172.17.0.2:35460
> I1031 05:59:04.404343 6851 authenticator.cpp:414] Starting authentication
> session for crammd5-authenticatee(395)@172.17.0.2:35460
> I1031 05:59:04.404551 6833 authenticator.cpp:98] Creating new server SASL
> connection
> I1031 05:59:04.404736  6853 authenticatee.cpp:213] Received SASL
> authentication mechanisms: CRAM-MD5
> I1031 05:59:04.404767  6853 authenticatee.cpp:239] Attempting to
> authenticate with mechanism 'CRAM-MD5'
> I1031 05:59:04.404875  6842 authenticator.cpp:204] Received SASL
> authentication start
> I1031 05:59:04.404938  6842 authenticator.cpp:326] Authentication requires
> more steps
> I1031 05:59:04.405032  6848 authenticatee.cpp:259] Received SASL
> authentication step
> I1031 05:59:04.405145  6845 authenticator.cpp:232] Received SASL
> authentication step
> I1031 05:59:04.405176  6845 auxprop.cpp:109] Request to lookup properties
> for user: 'test-principal' realm: '5b70b7f9ad53' server FQDN:
> '5b70b7f9ad53' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I1031 05:59:04.405200  6845 auxprop.cpp:181] Looking up auxiliary property
> '*userPassword'
> I1031 05:59:04.405236  6845 auxprop.cpp:181] Looking up auxiliary property
> '*cmusaslsecretCRAM-MD5'
> I1031 05:59:04.405261  6845 auxprop.cpp:109] Request to lookup properties
> for user: 'test-principal' realm: '5b70b7f9ad53' server FQDN:
> '5b70b7f9ad53' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I1031 05:59:04.405275  6845 auxprop.cpp:131] Skipping auxiliary property
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I1031 05:59:04.405285  6845 auxprop.cpp:131] Skipping auxiliary property
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I1031 05:59:04.405302  6845 authenticator.cpp:318] Authentication success
> I1031 05:59:04.405378  6852 authenticatee.cpp:299] Authentication success
> I1031 05:59:04.405427  6850 master.cpp:8158] Successfully authenticated
> principal 'test-principal' at slave(181)@172.17.0.2:35460
> I1031 05:59:04.405454 6836 authenticator.cpp:432] Authentication session
> cleanup for crammd5-authenticatee(395)@172.17.0.2:35460
> I1031 05:59:04.405609 6837 slave.cpp:1153] Successfully authenticated
> with master master@172.17.0.2:35460
> I1031 05:59:04.405791 6837 slave.cpp:1632] Will retry registration in
> 19.947883ms if necessary
> I1031 05:59:04.405939  6846 master.cpp:5964] Received register agent
> message from slave(181)@172.17.0.2:35460 (maintenance-host-2)
> I1031 05:59:04.405973 6846 master.cpp:3859] Authorizing agent with
> principal 'test-principal'
> I1031 05:59:04.406287  6835 master.cpp:6024] Authorized registration of
> agent at slave(181)@172.17.0.2:35460 (maintenance-host-2)
> 

Re: Build failed in Jenkins: Mesos-Reviewbot #19874

2017-10-31 Thread Benjamin Mahler
+alexr

Looks like the destruction order here is wrong and the mock gets called
after being destroyed? Alex, I seem to remember you mentioning something
like this. Is this a known issue?

On Sun, Oct 22, 2017 at 2:49 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <https://builds.apache.org/job/Mesos-Reviewbot/19874/display/redirect?page=changes>
>
> Changes:
>
> [mpark] Fixed missing `convertResourceFormat` cases in
> `master::updateSlave`.
>
> --
> [...truncated 31.95 MB...]
> I1022 21:49:03.830294 5715 status_update_manager.cpp:531] Cleaning up
> status update stream for task 6714b4b6-7d63-4d61-8d47-35a56b9299f7 of
> framework 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-
> I1022 21:49:03.830319  5703 gc.cpp:90] Scheduling '/tmp/ContentType_
> SchedulerTest_Message_0_7TwIEC/slaves/1d6b43d9-f7e7-
> 422e-bdde-b6aae84cf0f2-S0/frameworks/1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-'
> for gc 6.9039070519days in the future
> I1022 21:49:03.830335  5701 slave.cpp:869] Agent terminating
> I1022 21:49:03.830533  5694 master.cpp:1303] Agent 
> 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-S0
> at slave(816)@172.17.0.4:41816 (da4f1f3e1924) disconnected
> I1022 21:49:03.830559 5694 master.cpp:3336] Disconnecting agent
> 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-S0 at slave(816)@172.17.0.4:41816
> (da4f1f3e1924)
> I1022 21:49:03.830626 5694 master.cpp:3355] Deactivating agent
> 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-S0 at slave(816)@172.17.0.4:41816
> (da4f1f3e1924)
> I1022 21:49:03.830760 5697 hierarchical.cpp:690] Agent
> 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-S0 deactivated
> I1022 21:49:03.833765  5712 master.cpp:1145] Master terminating
> I1022 21:49:03.834528  5699 hierarchical.cpp:626] Removed agent
> 1d6b43d9-f7e7-422e-bdde-b6aae84cf0f2-S0
> [   OK ] ContentType/SchedulerTest.Message/0 (133 ms)
> [ RUN  ] ContentType/SchedulerTest.Message/1
> I1022 21:49:03.841037  5691 cluster.cpp:162] Creating default 'local'
> authorizer
> I1022 21:49:03.843849  5697 master.cpp:445] Master 
> 2f18d0ae-e6dd-4c21-b689-3a682a16da7a
> (da4f1f3e1924) started on 172.17.0.4:41816
> I1022 21:49:03.843871 5697 master.cpp:447] Flags at startup: --acls=""
> --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="true"
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/kghDlQ/credentials"
> --filter_gpu_resources="true" --framework_sorter="drf" --help="false"
> --hostname_lookup="true" --http_authenticators="basic" 
> --http_framework_authenticators="basic"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --max_unreachable_tasks_per_framework="1000" --port="5050"
> --quiet="false" --recovery_agent_removal_limit="100%"
> --registry="in_memory" --registry_fetch_timeout="1mins"
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks"
> --registry_max_agent_count="102400" --registry_store_timeout="100secs"
> --registry_strict="false" --root_submissions="true" --user_sorter="drf"
> --version="false" --webui_dir="/mesos/mesos-1.5.0/_inst/share/mesos/webui"
> --work_dir="/tmp/kghDlQ/master" --zk_session_timeout="10secs"
> I1022 21:49:03.844151  5697 master.cpp:496] Master only allowing
> authenticated frameworks to register
> I1022 21:49:03.844162  5697 master.cpp:502] Master only allowing
> authenticated agents to register
> I1022 21:49:03.844167  5697 master.cpp:508] Master only allowing
> authenticated HTTP frameworks to register
> I1022 21:49:03.844173  5697 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/kghDlQ/credentials'
> I1022 21:49:03.844461  5697 master.cpp:552] Using default 'crammd5'
> authenticator
> I1022 21:49:03.844617  5697 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-master-readonly'
> I1022 21:49:03.844780  5697 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-master-readwrite'
> I1022 21:49:03.844894  5697 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-master-scheduler'
> I1022 21:49:03.845003  5697 master.cpp:631] Authorization enabled
> I1022 21:49:03.845147  5702 hierarchical.cpp:171] Initialized hierarchical
> allocator process
> I1022 21:49:03.845190  5703 whitelist_watcher.cpp:77] No whitelist given
> I1022 21:49:03.848208  5692 master.cpp:2198] Elected as the leading master!
> I1022 21:49:03.848249  5692 master.cpp:1687] Recovering from registrar
> I1022 21:49:03.848423  5693 registrar.cpp:347] Recovering registrar
> I1022 21:49:03.849054  5693 registrar.cpp:391] Successfully fetched the
> registry (0B) in 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,clang,--verbose --disable-libtool-wrappers --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubunt

2017-10-27 Thread Benjamin Mahler
Captured by https://issues.apache.org/jira/browse/MESOS-7972

Posted a fix here: https://reviews.apache.org/r/63391/

On Thu, Oct 26, 2017 at 4:26 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos-Buildbot/BUILDTOOL=autotools,COMPILER=clang,CONFIGURATION=--verbose%20--disable-libtool-wrappers%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23)/4368/display/redirect?page=changes
> >
>
> Changes:
>
> [jpeach] Closed a  tag on the documentation landing page.
>
> --
> [...truncated 33.16 MB...]
> I1026 11:26:33.976655 17088 slave.cpp:6310] Finished recovery
> I1026 11:26:33.979688 17102 process.cpp:3938] Handling HTTP event for
> process 'slave(828)' with path: '/slave(828)/containers'
> I1026 11:26:33.981986 17087 http.cpp:1185] HTTP GET for
> /slave(828)/containers from 172.17.0.2:44498
> I1026 11:26:33.982077 17087 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/containers' endpoint
> I1026 11:26:33.990456  6832 slave.cpp:869] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2 (40 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.UnauthorizedRequest/0
> I1026 11:26:34.001971  6832 containerizer.cpp:301] Using isolation {
> environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
> W1026 11:26:34.002554  6832 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W1026 11:26:34.002670  6832 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1026 11:26:34.002707  6832 provisioner.cpp:255] Using default backend
> 'copy'
> I1026 11:26:34.006206 17101 slave.cpp:254] Mesos agent started on (829)@
> 172.17.0.2:44431
> I1026 11:26:34.006242 17101 slave.cpp:255] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://; --appc_store_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_zqMo2G/store/appc"
> --authenticate_http_executors="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticatee="crammd5"
> --authentication_backoff_factor="1secs" --authorizer="local"
> --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_zqMo2G/credential" --default_role="*"
> --disallow_sharing_agent_pid_namespace="false"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_zqMo2G/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_reregistration_timeout="2secs"
> --executor_secret_key="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_zqMo2G/executor_secret_key"
> --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_zqMo2G/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_zqMo2G/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.5.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_zqMo2G"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_eaCLfG"
> --zk_session_timeout="10secs"
> I1026 11:26:34.006855 17101 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_zqMo2G/credential'
> I1026 11:26:34.007092 17101 slave.cpp:287] 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,gcc,--verbose --disable-libtool-wrappers --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-

2017-10-25 Thread Benjamin Mahler
Updated https://issues.apache.org/jira/browse/MESOS-1553.

On Wed, Oct 25, 2017 at 9:49 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--
> disable-libtool-wrappers%20--enable-libevent%20--enable-
> ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%
> 3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!
> qnode3)&&(!H23)/4367/display/redirect?page=changes>
>
> Changes:
>
> [yujie.jay] Updated glog-0.3.3.patch to build on ARM.
>
> --
> [...truncated 33.20 MB...]
> I1025 16:48:02.947698 12326 slave.cpp:6310] Finished recovery
> I1025 16:48:02.953953 12325 process.cpp:3938] Handling HTTP event for
> process 'slave(828)' with path: '/slave(828)/containers'
> I1025 16:48:02.956698 12319 http.cpp:1185] HTTP GET for
> /slave(828)/containers from 172.17.0.4:48902
> I1025 16:48:02.956957 12319 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/containers' endpoint
> I1025 16:48:02.975440 12320 slave.cpp:869] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2 (64 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.UnauthorizedRequest/0
> I1025 16:48:02.992743  6875 containerizer.cpp:301] Using isolation {
> environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
> W1025 16:48:02.993317  6875 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W1025 16:48:02.993468  6875 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1025 16:48:02.993540  6875 provisioner.cpp:255] Using default backend
> 'copy'
> I1025 16:48:02.999120 12322 slave.cpp:254] Mesos agent started on (829)@
> 172.17.0.4:38599
> I1025 16:48:02.999231 12322 slave.cpp:255] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_2e4SyJ/store/appc"
> --authenticate_http_executors="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticatee="crammd5"
> --authentication_backoff_factor="1secs" --authorizer="local"
> --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_2e4SyJ/credential" --default_role="*"
> --disallow_sharing_agent_pid_namespace="false"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_2e4SyJ/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_reregistration_timeout="2secs"
> --executor_secret_key="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_2e4SyJ/executor_secret_key"
> --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_2e4SyJ/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_2e4SyJ/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.5.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_2e4SyJ"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_zrHrE5"
> --zk_session_timeout="10secs"
> I1025 16:48:02.999686 12322 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_2e4SyJ/credential'
> I1025 16:48:03.15 12322 slave.cpp:287] Agent using credential for:
> test-principal
> I1025 16:48:03.000102 

Re: Build failed in Jenkins: Mesos-Buildbot » cmake,gcc,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4359

2017-10-24 Thread Benjamin Mahler
This looks like a timeout:

3: I1024 04:12:32.741120 17193 replica.cpp:540] Replica received write
request for position 1 from __req_res__(1523)@172.17.0.2:51271
3: I1024 04:12:32.741189 17201 replica.cpp:540] Replica received write
request for position 1 from __req_res__(1524)@172.17.0.2:51271
3: I1024 04:12:32.741619 17201 leveldb.cpp:341] Persisting action (193
bytes) to leveldb took 388173ns
3: I1024 04:12:32.741619 17193 leveldb.cpp:341] Persisting action (193
bytes) to leveldb took 461690ns
3: *I1024 04:12:43.419662* 17193 replica.cpp:711] Persisted action APPEND
at position 1
3: *E1024 04:12:42.741019* 17195 registrar.cpp:575] Registrar aborting:
Failed to update registry: Failed to perform store within 10secs
3: *I1024 04:12:32.741649* 17201 replica.cpp:711] Persisted action APPEND
at position 1
3: /mesos/src/tests/registrar_tests.cpp:897: Failure
3: (registry).failure(): Failed to recover registrar: Failed to persist
MasterInfo: Failed to update registry: Failed to perform store within 10secs
3: [  FAILED  ] RegistrarTest.UpdateQuota (10720 ms)

Seems like thread 17201 was starved after taking its log timestamp but
before writing to the log. 10 seconds is probably too low a timeout for Apache CI.

I'll bump the existing timeout to match the default of 15 seconds used by
the AWAIT_* macros.

On Mon, Oct 23, 2017 at 9:16 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers,
> ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,
> label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)
> &&(!H23)/4359/display/redirect?page=changes>
>
> Changes:
>
> [bmahler] Fixed the flaky MasterTest.IgnoreOldAgentReregistration.
>
> --
> [...truncated 29.94 MB...]
> 3: type: ANY
> 3:   }
> 3:   creator_principals {
> 3: type: NONE
> 3:   }
> 3: }
> 3: " --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="false"
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/2UdZK0/credentials"
> --filter_gpu_resources="true" --framework_sorter="drf" --help="false"
> --hostname_lookup="true" --http_authenticators="basic" 
> --http_framework_authenticators="basic"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --max_unreachable_tasks_per_framework="1000" --port="5050"
> --quiet="false" --recovery_agent_removal_limit="100%"
> --registry="in_memory" --registry_fetch_timeout="1mins"
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks"
> --registry_max_agent_count="102400" --registry_store_timeout="100secs"
> --registry_strict="false" --roles="default-role" --root_submissions="true"
> --user_sorter="drf" --version="false" 
> --webui_dir="/usr/local/share/mesos/webui"
> --work_dir="/tmp/2UdZK0/master" --zk_session_timeout="10secs"
> 3: I1024 04:15:05.211155 17199 master.cpp:498] Master allowing
> unauthenticated frameworks to register
> 3: I1024 04:15:05.211163 17199 master.cpp:502] Master only allowing
> authenticated agents to register
> 3: I1024 04:15:05.211165 17199 master.cpp:508] Master only allowing
> authenticated HTTP frameworks to register
> 3: I1024 04:15:05.211170 17199 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/2UdZK0/credentials'
> 3: I1024 04:15:05.211450 17199 master.cpp:552] Using default 'crammd5'
> authenticator
> 3: I1024 04:15:05.211606 17199 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-readonly'
> 3: I1024 04:15:05.211766 17199 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-readwrite'
> 3: I1024 04:15:05.211912 17199 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-scheduler'
> 3: I1024 04:15:05.212061 17199 master.cpp:631] Authorization enabled
> 3: W1024 04:15:05.212079 17199 master.cpp:694] The '--roles' flag is
> deprecated. This flag will be removed in the future. See the Mesos 0.27
> upgrade notes for more information
> 3: I1024 04:15:05.212311 17192 hierarchical.cpp:171] Initialized
> hierarchical allocator process
> 3: I1024 04:15:05.212327 17191 whitelist_watcher.cpp:77] No whitelist given
> 3: I1024 04:15:05.215448 17189 master.cpp:2198] Elected as the leading
> master!
> 3: I1024 04:15:05.215487 17189 master.cpp:1687] Recovering from registrar
> 3: I1024 04:15:05.215698 17195 registrar.cpp:347] Recovering registrar
> 3: I1024 04:15:05.216517 17195 registrar.cpp:391] Successfully fetched the
> registry (0B) in 0ns
> 3: I1024 

Re: Build failed in Jenkins: Mesos-Buildbot » cmake,gcc,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,centos:7,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4359

2017-10-24 Thread Benjamin Mahler
+gaston, you're assigned to this ticket? Are you still planning to look
into this?

https://issues.apache.org/jira/browse/MESOS-7742

On Mon, Oct 23, 2017 at 9:13 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--disable-libtool-wrappers,
> ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_
> exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!
> H23)/4359/display/redirect?page=changes>
>
> Changes:
>
> [bmahler] Fixed the flaky MasterTest.IgnoreOldAgentReregistration.
>
> --
> [...truncated 29.86 MB...]
> 3:   creator_principals {
> 3: type: NONE
> 3:   }
> 3: }
> 3: " --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="false"
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/E8oHWm/credentials"
> --filter_gpu_resources="true" --framework_sorter="drf" --help="false"
> --hostname_lookup="true" --http_authenticators="basic" 
> --http_framework_authenticators="basic"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --max_unreachable_tasks_per_framework="1000" --port="5050"
> --quiet="false" --recovery_agent_removal_limit="100%"
> --registry="in_memory" --registry_fetch_timeout="1mins"
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks"
> --registry_max_agent_count="102400" --registry_store_timeout="100secs"
> --registry_strict="false" --roles="default-role" --root_submissions="true"
> --user_sorter="drf" --version="false" 
> --webui_dir="/usr/local/share/mesos/webui"
> --work_dir="/tmp/E8oHWm/master" --zk_session_timeout="10secs"
> 3: I1024 04:11:08.909395 15898 master.cpp:498] Master allowing
> unauthenticated frameworks to register
> 3: I1024 04:11:08.909404 15898 master.cpp:502] Master only allowing
> authenticated agents to register
> 3: I1024 04:11:08.909407 15898 master.cpp:508] Master only allowing
> authenticated HTTP frameworks to register
> 3: I1024 04:11:08.909413 15898 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/E8oHWm/credentials'
> 3: I1024 04:11:08.909760 15898 master.cpp:552] Using default 'crammd5'
> authenticator
> 3: I1024 04:11:08.909956 15898 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-readonly'
> 3: I1024 04:11:08.910174 15898 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-readwrite'
> 3: I1024 04:11:08.910356 15898 http.cpp:1045] Creating default 'basic'
> HTTP authenticator for realm 'mesos-master-scheduler'
> 3: I1024 04:11:08.910513 15898 master.cpp:631] Authorization enabled
> 3: W1024 04:11:08.910527 15898 master.cpp:694] The '--roles' flag is
> deprecated. This flag will be removed in the future. See the Mesos 0.27
> upgrade notes for more information
> 3: I1024 04:11:08.910727 15887 hierarchical.cpp:171] Initialized
> hierarchical allocator process
> 3: I1024 04:11:08.910782 15888 whitelist_watcher.cpp:77] No whitelist given
> 3: I1024 04:11:08.914042 15906 master.cpp:2198] Elected as the leading
> master!
> 3: I1024 04:11:08.914069 15906 master.cpp:1687] Recovering from registrar
> 3: I1024 04:11:08.914283 15892 registrar.cpp:347] Recovering registrar
> 3: I1024 04:11:08.915030 15892 registrar.cpp:391] Successfully fetched the
> registry (0B) in 0ns
> 3: I1024 04:11:08.915171 15892 registrar.cpp:495] Applied 1 operations in
> 25118ns; attempting to update the registry
> 3: I1024 04:11:08.915908 15892 registrar.cpp:552] Successfully updated the
> registry in 0ns
> 3: I1024 04:11:08.916079 15892 registrar.cpp:424] Successfully recovered
> registrar
> 3: I1024 04:11:08.916563 15896 hierarchical.cpp:209] Skipping recovery of
> hierarchical allocator: nothing to recover
> 3: I1024 04:11:08.916564 15894 master.cpp:1791] Recovered 0 agents from
> the registry (129B); allowing 10mins for agents to re-register
> 3: W1024 04:11:08.923128 15886 process.cpp:3193] Attempted to spawn
> already running process files@172.17.0.4:38770
> 3: I1024 04:11:08.924201 15886 containerizer.cpp:301] Using isolation {
> environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
> 3: W1024 04:11:08.924837 15886 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> 3: W1024 04:11:08.924991 15886 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> 3: I1024 04:11:08.925025 15886 provisioner.cpp:255] Using default backend
> 'copy'
> 3: I1024 04:11:08.927402 15886 cluster.cpp:448] Creating 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,clang,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4355

2017-10-23 Thread Benjamin Mahler
+jie, gaston, benjamin, gilbert

Looks like there wasn't a ticket for this, so I created one:
https://issues.apache.org/jira/browse/MESOS-8124

Can one of you look into this?

On Sun, Oct 22, 2017 at 7:04 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  autotools,COMPILER=clang,CONFIGURATION=--verbose%20--
> disable-libtool-wrappers,ENVIRONMENT=GLOG_v=1%20MESOS_
> VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-
> us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23)/4355/display/redirect?page=changes>
>
> Changes:
>
> [bmahler] Added MESOS-7921 to the 1.4.1 CHANGELOG.
>
> [mpark] Fixed missing `convertResourceFormat` cases in
> `master::updateSlave`.
>
> --
> [...truncated 32.23 MB...]
> I1023 02:04:35.459108  3553 process.cpp:3938] Handling HTTP event for
> process 'slave(821)' with path: '/slave(821)/monitor/statistics.json'
> I1023 02:04:35.461323  3556 http.cpp:1185] HTTP GET for
> /slave(821)/monitor/statistics.json from 172.17.0.3:48930
> I1023 02:04:35.461457  3556 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/monitor/statistics.json' endpoint
> I1023 02:04:35.488247  3553 slave.cpp:869] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/1 (119 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2
> I1023 02:04:35.541273  3548 containerizer.cpp:301] Using isolation {
> environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
> W1023 02:04:35.548425  3548 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W1023 02:04:35.548691  3548 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1023 02:04:35.548770  3548 provisioner.cpp:255] Using default backend
> 'copy'
> I1023 02:04:35.555105  3553 slave.cpp:254] Mesos agent started on (822)@
> 172.17.0.3:36880
> I1023 02:04:35.555160  3553 slave.cpp:255] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/
> Endpoint_SlaveEndpointTest_AuthorizedRequest_2_bJEKSK/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_bJEKSK/credential" --default_role="*"
> --disallow_sharing_agent_pid_namespace="false"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_bJEKSK/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_reregistration_timeout="2secs"
> --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/
> Endpoint_SlaveEndpointTest_AuthorizedRequest_2_bJEKSK/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_bJEKSK/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.5.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_bJEKSK"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_hAH0Dh"
> --zk_session_timeout="10secs"
> I1023 02:04:35.75  3553 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_bJEKSK/credential'
> I1023 02:04:35.555842  3553 slave.cpp:287] Agent using credential for:
> test-principal
> I1023 02:04:35.555876  3553 credentials.hpp:37] Loading 

Re: Build failed in Jenkins: Mesos-Buildbot » cmake,gcc,--verbose --disable-libtool-wrappers --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)

2017-10-23 Thread Benjamin Mahler
Chatted with Greg; it looks like Ilya's suggestion in
https://issues.apache.org/jira/browse/MESOS-6985 is a good fix.

Ilya, would you be able to work on the fix?

On Sun, Oct 22, 2017 at 9:58 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--
> disable-libtool-wrappers%20--enable-libevent%20--enable-
> ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%
> 3A14.04,label_exp=(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!
> qnode3)&&(!H23)/4357/display/redirect>
>
> --
> [...truncated 7.95 MB...]
> I1023 04:58:23.549506  2758 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1023 04:58:23.549675  2758 http.cpp:1066] Creating default 'jwt' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1023 04:58:23.549998  2758 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-agent-readwrite'
> I1023 04:58:23.550155  2758 http.cpp:1066] Creating default 'jwt' HTTP
> authenticator for realm 'mesos-agent-readwrite'
> I1023 04:58:23.552693  2758 slave.cpp:565] Agent resources:
> [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{
> "name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"ports","ranges":{"range":[{"begin":31000,"end":
> 32000}]},"type":"RANGES"}]
> I1023 04:58:23.552987  2758 slave.cpp:573] Agent attributes: [  ]
> I1023 04:58:23.553000  2758 slave.cpp:582] Agent hostname: 62baec6496c1
> I1023 04:58:23.553277  2766 status_update_manager.cpp:177] Pausing sending
> status updates
> I1023 04:58:23.555351  2770 state.cpp:64] Recovering state from
> '/tmp/SlaveTest_ContainersEndpointNoExecutor_3vooeB/meta'
> I1023 04:58:23.555416  2758 process.cpp:3938] Handling HTTP event for
> process 'slave(167)' with path: '/slave(167)/containers'
> I1023 04:58:23.556282  2775 status_update_manager.cpp:203] Recovering
> status update manager
> I1023 04:58:23.556704  2758 containerizer.cpp:609] Recovering containerizer
> I1023 04:58:23.558595  2776 http.cpp:1185] HTTP GET for
> /slave(167)/containers from 172.17.0.3:34776
> I1023 04:58:23.558727  2776 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/containers' endpoint
> I1023 04:58:23.558964  2763 provisioner.cpp:416] Provisioner recovery
> complete
> I1023 04:58:23.560379  2759 slave.cpp:6295] Finished recovery
> I1023 04:58:23.561076  2759 slave.cpp:6477] Querying resource estimator
> for oversubscribable resources
> I1023 04:58:23.562031  2759 slave.cpp:971] New master detected at
> master@172.17.0.3:42906
> I1023 04:58:23.562068  2775 status_update_manager.cpp:177] Pausing sending
> status updates
> I1023 04:58:23.562137  2759 slave.cpp:1006] Detecting new master
> I1023 04:58:23.562305  2759 slave.cpp:6491] Received oversubscribable
> resources {} from the resource estimator
> I1023 04:58:23.565470  2761 slave.cpp:843] Agent terminating
> I1023 04:58:23.573655  2754 master.cpp:1160] Master terminating
> [   OK ] SlaveTest.ContainersEndpointNoExecutor (64 ms)
> [ RUN  ] SlaveTest.ContainersEndpoint
> I1023 04:58:23.581178  2754 cluster.cpp:162] Creating default 'local'
> authorizer
> I1023 04:58:23.584719  2768 master.cpp:442] Master 
> 0c9daaa8-536e-4970-ab54-98b5dcff334e
> (62baec6496c1) started on 172.17.0.3:42906
> I1023 04:58:23.584751  2768 master.cpp:444] Flags at startup: --acls=""
> --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="true"
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/zCMnpS/credentials"
> --filter_gpu_resources="true" --framework_sorter="drf" --help="false"
> --hostname_lookup="true" --http_authenticators="basic" 
> --http_framework_authenticators="basic"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --max_unreachable_tasks_per_framework="1000" --port="5050"
> --quiet="false" --recovery_agent_removal_limit="100%"
> --registry="in_memory" --registry_fetch_timeout="1mins"
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks"
> --registry_max_agent_count="102400" --registry_store_timeout="100secs"
> --registry_strict="false" --root_submissions="true" --user_sorter="drf"
> --version="false" --webui_dir="/usr/local/share/mesos/webui"
> --work_dir="/tmp/zCMnpS/master" --zk_session_timeout="10secs"
> I1023 04:58:23.585140  2768 master.cpp:494] Master only allowing
> authenticated frameworks to register
> I1023 04:58:23.585150  2768 master.cpp:508] Master only 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,gcc,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,centos:7,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4357

2017-10-23 Thread Benjamin Mahler
Is anyone digging into these? I see no activity on
https://issues.apache.org/jira/browse/MESOS-7440.

On Sun, Oct 22, 2017 at 10:40 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--
> disable-libtool-wrappers,ENVIRONMENT=GLOG_v=1%20MESOS_
> VERBOSE=1,OS=centos%3A7,label_exp=(ubuntu)&&(!ubuntu-us1)&&(
> !ubuntu-eu2)&&(!qnode3)&&(!H23)/4357/display/redirect>
>
> --
> [...truncated 30.86 MB...]
> I1023 05:40:26.778672 6168 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1023 05:40:26.778993  6168 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-agent-readwrite'
> I1023 05:40:26.781251  6168 slave.cpp:565] Agent resources:
> [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{
> "name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"
> name":"ports","ranges":{"range":[{"begin":31000,"end":
> 32000}]},"type":"RANGES"}]
> I1023 05:40:26.781694  6168 slave.cpp:573] Agent attributes: [  ]
> I1023 05:40:26.781718  6168 slave.cpp:582] Agent hostname: 7462a85b6824
> I1023 05:40:26.781955  6163 status_update_manager.cpp:177] Pausing sending
> status updates
> I1023 05:40:26.784184  6162 state.cpp:64] Recovering state from
> '/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_74RDmQ/meta'
> I1023 05:40:26.784605  6163 status_update_manager.cpp:203] Recovering
> status update manager
> I1023 05:40:26.784904  6164 containerizer.cpp:609] Recovering containerizer
> I1023 05:40:26.787271  6168 provisioner.cpp:416] Provisioner recovery
> complete
> I1023 05:40:26.787889  6164 slave.cpp:6295] Finished recovery
> I1023 05:40:26.788600  6164 slave.cpp:6477] Querying resource estimator
> for oversubscribable resources
> I1023 05:40:26.789077  6166 slave.cpp:6491] Received oversubscribable
> resources {} from the resource estimator
> I1023 05:40:26.793305  6168 process.cpp:3938] Handling HTTP event for
> process 'slave(796)' with path: '/slave(796)/containers'
> I1023 05:40:26.795511  6162 http.cpp:1185] HTTP GET for
> /slave(796)/containers from 172.17.0.3:53220
> I1023 05:40:26.795629  6162 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/containers' endpoint
> I1023 05:40:26.801371  6165 slave.cpp:843] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2 (48 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.UnauthorizedRequest/0
> I1023 05:40:26.819031  6143 containerizer.cpp:246] Using isolation:
> posix/cpu,posix/mem,filesystem/posix,network/cni,environment_secret
> W1023 05:40:26.819830  6143 backend.cpp:76] Failed to create 'overlay'
> backend: OverlayBackend requires root privileges
> W1023 05:40:26.820036  6143 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1023 05:40:26.820083  6143 provisioner.cpp:255] Using default backend
> 'copy'
> I1023 05:40:26.825731  6163 slave.cpp:250] Mesos agent started on (797)@
> 172.17.0.3:33485
> I1023 05:40:26.825773 6163 slave.cpp:251] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_wurBF9/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_wurBF9/credential" --default_role="*"
> --disallow_sharing_agent_pid_namespace="false"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_wurBF9/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_reregistration_timeout="2secs"
> --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/
> Endpoint_SlaveEndpointTest_UnauthorizedRequest_0_wurBF9/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> UnauthorizedRequest_0_wurBF9/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,clang,--verbose --disable-libtool-wrappers,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(ubuntu)&&(!ubuntu-us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23) #4358

2017-10-23 Thread Benjamin Mahler
Posted a fix here: https://reviews.apache.org/r/63224/

On Mon, Oct 23, 2017 at 1:16 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  autotools,COMPILER=clang,CONFIGURATION=--verbose%20--
> disable-libtool-wrappers,ENVIRONMENT=GLOG_v=1%20MESOS_
> VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(ubuntu)&&(!ubuntu-
> us1)&&(!ubuntu-eu2)&&(!qnode3)&&(!H23)/4358/display/redirect?page=changes>
>
> Changes:
>
> [yujie.jay] Fixed GPU tests.
>
> --
> [...truncated 32.22 MB...]
> I1023 20:16:13.263487  3553 http.cpp:1185] HTTP GET for
> /slave(821)/monitor/statistics.json from 172.17.0.4:57338
> I1023 20:16:13.263748  3553 http.cpp:976] Authorizing principal
> 'test-principal' to GET the '/monitor/statistics.json' endpoint
> I1023 20:16:13.267997  3546 slave.cpp:869] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/1 (115 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2
> I1023 20:16:13.318758  3546 containerizer.cpp:301] Using isolation {
> environment_secret, posix/cpu, posix/mem, filesystem/posix, network/cni }
> W1023 20:16:13.319428  3546 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W1023 20:16:13.319738  3546 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1023 20:16:13.319803  3546 provisioner.cpp:255] Using default backend
> 'copy'
> I1023 20:16:13.325991  3549 slave.cpp:254] Mesos agent started on (822)@
> 172.17.0.4:39410
> I1023 20:16:13.326225  3549 slave.cpp:255] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/
> Endpoint_SlaveEndpointTest_AuthorizedRequest_2_w0EC95/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_w0EC95/credential" --default_role="*"
> --disallow_sharing_agent_pid_namespace="false"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_w0EC95/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_reregistration_timeout="2secs"
> --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/
> Endpoint_SlaveEndpointTest_AuthorizedRequest_2_w0EC95/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_w0EC95/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.5.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_w0EC95"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_fAqZtL"
> --zk_session_timeout="10secs"
> I1023 20:16:13.327090  3549 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_w0EC95/credential'
> I1023 20:16:13.327663  3549 slave.cpp:287] Agent using credential for:
> test-principal
> I1023 20:16:13.327831  3549 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_w0EC95/http_credentials'
> I1023 20:16:13.328894  3549 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1023 20:16:13.329382  3549 http.cpp:1045] Creating default 'basic' HTTP
> authenticator for realm 

Re: Build failed in Jenkins: Mesos-Buildbot » cmake,clang,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2) #3218

2017-02-04 Thread Benjamin Mahler
+Kevin

Is this a known issue?
[  FAILED  ] IOSwitchboardTest.RecoverThenKillSwitchboardContainerDestroyed

On Fri, Feb 3, 2017 at 6:41 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  cmake,COMPILER=clang,CONFIGURATION=--verbose%20--
> enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%
> 20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%
> 7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/3218/changes>
>
> Changes:
>
> [xujyan] Fix handling of total resources in role and quota role sorters.
>
> [xujyan] Updated the persistent volume test framework to include shared
> volumes.
>
> [xujyan] Fix to potential dangling pointer in `batch()`.
>
> --
> [...truncated 177037 lines...]
> I0204 02:40:46.234028 25325 status_update_manager.cpp:177] Pausing sending
> status updates
> I0204 02:40:46.234318 25334 state.cpp:60] Recovering state from
> '/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_1_MaAVKg/meta'
> I0204 02:40:46.234459 25325 status_update_manager.cpp:203] Recovering
> status update manager
> I0204 02:40:46.234542 25322 containerizer.cpp:599] Recovering containerizer
> I0204 02:40:46.235121 25322 provisioner.cpp:410] Provisioner recovery
> complete
> I0204 02:40:46.235280 25322 slave.cpp:5422] Finished recovery
> I0204 02:40:46.235682 25322 slave.cpp:5596] Querying resource estimator
> for oversubscribable resources
> I0204 02:40:46.235846 25322 slave.cpp:5610] Received oversubscribable
> resources {} from the resource estimator
> I0204 02:40:46.236680 25322 process.cpp:3697] Handling HTTP event for
> process 'slave(689)' with path: '/slave(689)/monitor/statistics.json'
> I0204 02:40:46.237081 25322 http.cpp:871] Authorizing principal
> 'test-principal' to GET the '/monitor/statistics.json' endpoint
> I0204 02:40:46.238281 25319 slave.cpp:801] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/1 (14 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2
> I0204 02:40:46.245890 25319 containerizer.cpp:220] Using isolation:
> posix/cpu,posix/mem,filesystem/posix,network/cni
> W0204 02:40:46.246248 25319 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W0204 02:40:46.246301 25319 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I0204 02:40:46.246345 25319 provisioner.cpp:249] Using default backend
> 'copy'
> I0204 02:40:46.247617 25327 slave.cpp:209] Mesos agent started on (690)@
> 172.17.0.2:48172
> I0204 02:40:46.247668 25327 slave.cpp:210] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_3AtHHN/credential" --default_role="*"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_3AtHHN/fetch" --fetcher_cache_size="2GB"
> --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
> --hadoop_home="" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_3AtHHN/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/build/src" --logbufsecs="0"
> --logging_level="INFO" --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_3AtHHN"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" 

Re: Build failed in Jenkins: Mesos-Reviewbot #16979

2017-02-04 Thread Benjamin Mahler
+neil

Is this test flaky?
[  FAILED  ] ReconciliationTest.UnknownTaskPartitionAware

It also appears that other reconciliation tests failed in other runs of the
reviewbot.

On Sat, Feb 4, 2017 at 12:27 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> --
> [...truncated 176907 lines...]
> I0204 20:27:41.754417 30284 status_update_manager.cpp:203] Recovering
> status update manager
> I0204 20:27:41.754622 30293 containerizer.cpp:599] Recovering containerizer
> I0204 20:27:41.755957 30283 provisioner.cpp:410] Provisioner recovery
> complete
> I0204 20:27:41.756379 30281 slave.cpp:5499] Finished recovery
> I0204 20:27:41.756902 30281 slave.cpp:5673] Querying resource estimator
> for oversubscribable resources
> I0204 20:27:41.757135 30287 slave.cpp:5687] Received oversubscribable
> resources {} from the resource estimator
> I0204 20:27:41.759475 30283 process.cpp:3697] Handling HTTP event for
> process 'slave(694)' with path: '/slave(694)/monitor/statistics.json'
> I0204 20:27:41.760957 30281 http.cpp:871] Authorizing principal
> 'test-principal' to GET the '/monitor/statistics.json' endpoint
> I0204 20:27:41.764798 30283 slave.cpp:803] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/1 (29 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2
> I0204 20:27:41.775665 30262 containerizer.cpp:220] Using isolation:
> posix/cpu,posix/mem,filesystem/posix,network/cni
> W0204 20:27:41.776273 30262 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W0204 20:27:41.776432 30262 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I0204 20:27:41.776489 30262 provisioner.cpp:249] Using default backend
> 'copy'
> I0204 20:27:41.780480 30286 slave.cpp:211] Mesos agent started on (695)@
> 172.17.0.3:57898
> I0204 20:27:41.780516 30286 slave.cpp:212] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_nshf1o/credential" --default_role="*"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_nshf1o/fetch" --fetcher_cache_size="2GB"
> --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
> --hadoop_home="" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_nshf1o/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.2.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_nshf1o"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_rrEyWd"
> I0204 20:27:41.781525 30286 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_nshf1o/credential'
> I0204 20:27:41.781770 30286 slave.cpp:354] Agent using credential for:
> test-principal
> I0204 20:27:41.781806 30286 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_nshf1o/http_credentials'
> I0204 20:27:41.78 30286 http.cpp:919] Using default 'basic' HTTP
> 

Re: Build failed in Jenkins: Mesos-Buildbot » autotools,gcc,--verbose,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2) #3220

2017-02-04 Thread Benjamin Mahler
+neil

Is this test flaky? I wasn't able to grab the logs since Jenkins appears to
be non-responsive at the moment.

[  FAILED  ] RegistrarTest.PruneUnreachable

On Sat, Feb 4, 2017 at 3:03 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  autotools,COMPILER=gcc,CONFIGURATION=--verbose,
> ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,
> label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-eu2)/3220/changes>
>
> Changes:
>
> [xujyan] Use the stout ELF parser to implement ldd.
>
> [xujyan] Add some simple ldd() tests.
>
> [xujyan] Use the stout ELF parser to collect Linux rootfs files.
>
> [benjamin.hindman] Replaced recursive implementation in http::Connection
> with loop.
>
> [benjamin.hindman] Re-enabled disabled test.
>
> [benjamin.hindman] Replaced (another) recursive implementation with
> process::loop.
>
> [benjamin.hindman] Re-enabled a test.
>
> [benjamin.hindman] Introduced process::after.
>
> [benjamin.hindman] Used process::after instead of process::RateLimiter.
>
> --
> [...truncated 177912 lines...]
> I0204 11:00:49.951364 30284 status_update_manager.cpp:203] Recovering
> status update manager
> I0204 11:00:49.951576 30279 containerizer.cpp:599] Recovering containerizer
> I0204 11:00:49.952945 30290 provisioner.cpp:410] Provisioner recovery
> complete
> I0204 11:00:49.953316 30280 slave.cpp:5499] Finished recovery
> I0204 11:00:49.953835 30280 slave.cpp:5673] Querying resource estimator
> for oversubscribable resources
> I0204 11:00:49.954129 30279 slave.cpp:5687] Received oversubscribable
> resources {} from the resource estimator
> I0204 11:00:49.957427 30284 process.cpp:3697] Handling HTTP event for
> process 'slave(694)' with path: '/slave(694)/monitor/statistics.json'
> I0204 11:00:49.958652 30289 http.cpp:871] Authorizing principal
> 'test-principal' to GET the '/monitor/statistics.json' endpoint
> I0204 11:00:49.962405 30280 slave.cpp:803] Agent terminating
> [   OK ] Endpoint/SlaveEndpointTest.AuthorizedRequest/1 (31 ms)
> [ RUN  ] Endpoint/SlaveEndpointTest.AuthorizedRequest/2
> I0204 11:00:49.972606 30259 containerizer.cpp:220] Using isolation:
> posix/cpu,posix/mem,filesystem/posix,network/cni
> W0204 11:00:49.973045 30259 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges
> W0204 11:00:49.973139 30259 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I0204 11:00:49.973171 30259 provisioner.cpp:249] Using default backend
> 'copy'
> I0204 11:00:49.976068 30278 slave.cpp:211] Mesos agent started on (695)@
> 172.17.0.4:47679
> I0204 11:00:49.976094 30278 slave.cpp:212] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_WNyVh6/credential" --default_role="*"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_WNyVh6/fetch" --fetcher_cache_size="2GB"
> --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
> --hadoop_home="" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_2_WNyVh6/http_credentials" 
> --http_heartbeat_interval="30secs"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.2.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_2_WNyVh6"
> 

Re: Build failed in Jenkins: Mesos » cmake,gcc,--verbose,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2888

2016-11-04 Thread Benjamin Mahler
It looks like the builds running on 'ubuntu-eu2' fail from OOMing, but the
ones that run on 'H2' work.

https://builds.apache.org/computer/ubuntu-eu2/
https://builds.apache.org/computer/H1/

Vinod, does this have to do with our label update? Unfortunately I can't
seem to find any information on how much memory these jenkins agents have
available for builds.


On Fri, Nov 4, 2016 at 10:55 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  COMPILER=gcc,CONFIGURATION=--verbose,ENVIRONMENT=GLOG_v=1%
> 20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%
> 7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2888/changes>
>
> Changes:
>
> [xujyan] Allow CREATE of shared persistent volumes based on capability.
>
> --
> [...truncated 10140 lines...]
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 94%] Building CXX object src/CMakeFiles/mesos-1.2.0.
> dir/slave/containerizer/mesos/isolators/network/cni/cni.cpp.o
> cd /mesos/build/src && /usr/bin/g++   -DBUILD_DATE="\"2016-3-3 10:20\""
> -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\" -DBUILD_TIME=\"100\"
> -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1 -DLIBDIR=\"/usr/local/libmesos\"
> -DPICOJSON_USE_INT64 -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.2.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> -I/mesos/src -I/mesos/3rdparty/stout/include -I/usr/include/apr-1.0
> -I/mesos/build/3rdparty/boost-1.53.0/src/boost-1.53.0
> -I/mesos/build/3rdparty/elfio-3.2/src/elfio-3.2
> -I/mesos/build/3rdparty/glog-0.3.3/src/glog-0.3.3-lib/lib/include
> -I/mesos/build/3rdparty/nvml-352.79/src/nvml-352.79
> -I/mesos/build/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/include
> -I/usr/include/subversion-1 -I/mesos/src/src 
> -I/mesos/3rdparty/libprocess/include
> -I/mesos/build/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2
> -I/mesos/build/3rdparty/libev-4.22/src/libev-4.22 -I/mesos/build/3rdparty/
> zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/include -I/mesos/build/3rdparty/
> zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/generated
> -I/mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4/include -fPIC -o
> CMakeFiles/mesos-1.2.0.dir/slave/containerizer/mesos/
> isolators/network/cni/cni.cpp.o -c /mesos/src/slave/
> containerizer/mesos/isolators/network/cni/cni.cpp
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 94%] Building CXX object src/CMakeFiles/mesos-1.2.0.
> dir/slave/containerizer/mesos/isolators/volume/image.cpp.o
> cd /mesos/build/src && /usr/bin/g++   -DBUILD_DATE="\"2016-3-3 10:20\""
> -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\" -DBUILD_TIME=\"100\"
> -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1 -DLIBDIR=\"/usr/local/libmesos\"
> -DPICOJSON_USE_INT64 -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.2.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> -I/mesos/src -I/mesos/3rdparty/stout/include -I/usr/include/apr-1.0
> -I/mesos/build/3rdparty/boost-1.53.0/src/boost-1.53.0
> -I/mesos/build/3rdparty/elfio-3.2/src/elfio-3.2
> -I/mesos/build/3rdparty/glog-0.3.3/src/glog-0.3.3-lib/lib/include
> -I/mesos/build/3rdparty/nvml-352.79/src/nvml-352.79
> -I/mesos/build/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/include
> -I/usr/include/subversion-1 -I/mesos/src/src 
> -I/mesos/3rdparty/libprocess/include
> -I/mesos/build/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2
> -I/mesos/build/3rdparty/libev-4.22/src/libev-4.22 -I/mesos/build/3rdparty/
> zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/include -I/mesos/build/3rdparty/
> zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/generated
> -I/mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4/include -fPIC -o
> CMakeFiles/mesos-1.2.0.dir/slave/containerizer/mesos/isolators/volume/image.cpp.o
> -c /mesos/src/slave/containerizer/mesos/isolators/volume/image.cpp
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 94%] Building CXX object src/CMakeFiles/mesos-1.2.0.
> dir/slave/containerizer/mesos/provisioner/backends/aufs.cpp.o
> cd /mesos/build/src && /usr/bin/g++   -DBUILD_DATE="\"2016-3-3 10:20\""
> -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\" -DBUILD_TIME=\"100\"
> -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1 -DLIBDIR=\"/usr/local/libmesos\"
> -DPICOJSON_USE_INT64 -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.2.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> 

Re: Build failed in Jenkins: Mesos » autotools,clang,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2873

2016-11-03 Thread Benjamin Mahler
https://issues.apache.org/jira/browse/MESOS-6544
https://issues.apache.org/jira/browse/MESOS-6545

On Thu, Nov 3, 2016 at 4:56 PM, Benjamin Mahler <bmah...@apache.org> wrote:

> This test uses two mock executors across two agents, which means the
> agents may concurrently call into the TestContainerizer to launch the
> container. The TestContainerizer is not a Process and so we may
> concurrently modify its data structures, I suspect that the following
> executes concurrently and leads to the crash:
>
> https://github.com/apache/mesos/blob/44242c058158727ce013bd51764368
> f5e120ee75/src/tests/containerizer.cpp#L131
>
> On Wed, Nov 2, 2016 at 6:41 PM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> See <https://builds.apache.org/job/Mesos/BUILDTOOL=autotools,COM
>> PILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20
>> --enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubunt
>> u%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(
>> !ubuntu-6)/2873/changes>
>>
>> Changes:
>>
>> [josephwu] Fixed some markdown whitespace style.
>>
>> [josephwu] Fixed a typo in `gpu-support.md`.
>>
>> [josephwu] Fixed FD leak due to exiting early in Mesos fetcher.
>>
>> --
>> [...truncated 78174 lines...]
>> I1103 01:40:55.530350 29098 slave.cpp:974] Authenticating with master
>> master@172.17.0.2:58302
>> I1103 01:40:55.530432 29098 slave.cpp:985] Using default CRAM-MD5
>> authenticatee
>> I1103 01:40:55.530627 29098 slave.cpp:947] Detecting new master
>> I1103 01:40:55.530675 29108 authenticatee.cpp:121] Creating new client
>> SASL connection
>> I1103 01:40:55.530743 29098 slave.cpp:5587] Received oversubscribable
>> resources {} from the resource estimator
>> I1103 01:40:55.530961 29099 master.cpp:6742] Authenticating slave(150)@
>> 172.17.0.2:58302
>> I1103 01:40:55.531070 29112 authenticator.cpp:414] Starting
>> authentication session for crammd5-authenticatee(357)@172.17.0.2:58302
>> I1103 01:40:55.531328 29106 authenticator.cpp:98] Creating new server
>> SASL connection
>> I1103 01:40:55.531561 29108 authenticatee.cpp:213] Received SASL
>> authentication mechanisms: CRAM-MD5
>> I1103 01:40:55.531604 29108 authenticatee.cpp:239] Attempting to
>> authenticate with mechanism 'CRAM-MD5'
>> I1103 01:40:55.531713 29101 authenticator.cpp:204] Received SASL
>> authentication start
>> I1103 01:40:55.531805 29101 authenticator.cpp:326] Authentication
>> requires more steps
>> I1103 01:40:55.531921 29108 authenticatee.cpp:259] Received SASL
>> authentication step
>> I1103 01:40:55.532120 29101 authenticator.cpp:232] Received SASL
>> authentication step
>> I1103 01:40:55.532155 29101 auxprop.cpp:109] Request to lookup properties
>> for user: 'test-principal' realm: '3a1c598ce334' server FQDN:
>> '3a1c598ce334' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
>> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
>> I1103 01:40:55.532179 29101 auxprop.cpp:181] Looking up auxiliary
>> property '*userPassword'
>> I1103 01:40:55.532233 29101 auxprop.cpp:181] Looking up auxiliary
>> property '*cmusaslsecretCRAM-MD5'
>> I1103 01:40:55.532266 29101 auxprop.cpp:109] Request to lookup properties
>> for user: 'test-principal' realm: '3a1c598ce334' server FQDN:
>> '3a1c598ce334' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
>> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
>> I1103 01:40:55.532289 29101 auxprop.cpp:131] Skipping auxiliary property
>> '*userPassword' since SASL_AUXPROP_AUTHZID == true
>> I1103 01:40:55.532305 29101 auxprop.cpp:131] Skipping auxiliary property
>> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
>> I1103 01:40:55.532335 29101 authenticator.cpp:318] Authentication success
>> I1103 01:40:55.532413 29110 authenticatee.cpp:299] Authentication success
>> I1103 01:40:55.532467 29108 master.cpp:6772] Successfully authenticated
>> principal 'test-principal' at slave(150)@172.17.0.2:58302
>> I1103 01:40:55.532536 29111 authenticator.cpp:432] Authentication session
>> cleanup for crammd5-authenticatee(357)@172.17.0.2:58302
>> I1103 01:40:55.532755 29098 slave.cpp:1069] Successfully authenticated
>> with master master@172.17.0.2:58302
>> I1103 01:40:55.532997 29098 slave.cpp:1483] Will retry registration in
>> 12.590371ms if necessary
>> I1103 01:40:55.533179 29108 master.cpp:5151] Registering agent at
>> slave(150)@172.17.0.2:58302 (maintenance-host-2) with id
>> 3167a687-904b-4b57-bc0f-91b67dc7e41d-S1
>> I1103 01:40:55.533572 291

Re: Build failed in Jenkins: Mesos » autotools,clang,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2873

2016-11-03 Thread Benjamin Mahler
This test uses two mock executors across two agents, which means the agents
may concurrently call into the TestContainerizer to launch the container.
The TestContainerizer is not a Process and so we may concurrently modify
its data structures, I suspect that the following executes concurrently and
leads to the crash:

https://github.com/apache/mesos/blob/44242c058158727ce013bd51764368f5e120ee75/src/tests/containerizer.cpp#L131

On Wed, Nov 2, 2016 at 6:41 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%
> 20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=
> ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-
> us1)&&(!ubuntu-6)/2873/changes>
>
> Changes:
>
> [josephwu] Fixed some markdown whitespace style.
>
> [josephwu] Fixed a typo in `gpu-support.md`.
>
> [josephwu] Fixed FD leak due to exiting early in Mesos fetcher.
>
> --
> [...truncated 78174 lines...]
> I1103 01:40:55.530350 29098 slave.cpp:974] Authenticating with master
> master@172.17.0.2:58302
> I1103 01:40:55.530432 29098 slave.cpp:985] Using default CRAM-MD5
> authenticatee
> I1103 01:40:55.530627 29098 slave.cpp:947] Detecting new master
> I1103 01:40:55.530675 29108 authenticatee.cpp:121] Creating new client
> SASL connection
> I1103 01:40:55.530743 29098 slave.cpp:5587] Received oversubscribable
> resources {} from the resource estimator
> I1103 01:40:55.530961 29099 master.cpp:6742] Authenticating slave(150)@
> 172.17.0.2:58302
> I1103 01:40:55.531070 29112 authenticator.cpp:414] Starting authentication
> session for crammd5-authenticatee(357)@172.17.0.2:58302
> I1103 01:40:55.531328 29106 authenticator.cpp:98] Creating new server SASL
> connection
> I1103 01:40:55.531561 29108 authenticatee.cpp:213] Received SASL
> authentication mechanisms: CRAM-MD5
> I1103 01:40:55.531604 29108 authenticatee.cpp:239] Attempting to
> authenticate with mechanism 'CRAM-MD5'
> I1103 01:40:55.531713 29101 authenticator.cpp:204] Received SASL
> authentication start
> I1103 01:40:55.531805 29101 authenticator.cpp:326] Authentication requires
> more steps
> I1103 01:40:55.531921 29108 authenticatee.cpp:259] Received SASL
> authentication step
> I1103 01:40:55.532120 29101 authenticator.cpp:232] Received SASL
> authentication step
> I1103 01:40:55.532155 29101 auxprop.cpp:109] Request to lookup properties
> for user: 'test-principal' realm: '3a1c598ce334' server FQDN:
> '3a1c598ce334' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I1103 01:40:55.532179 29101 auxprop.cpp:181] Looking up auxiliary property
> '*userPassword'
> I1103 01:40:55.532233 29101 auxprop.cpp:181] Looking up auxiliary property
> '*cmusaslsecretCRAM-MD5'
> I1103 01:40:55.532266 29101 auxprop.cpp:109] Request to lookup properties
> for user: 'test-principal' realm: '3a1c598ce334' server FQDN:
> '3a1c598ce334' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I1103 01:40:55.532289 29101 auxprop.cpp:131] Skipping auxiliary property
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I1103 01:40:55.532305 29101 auxprop.cpp:131] Skipping auxiliary property
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I1103 01:40:55.532335 29101 authenticator.cpp:318] Authentication success
> I1103 01:40:55.532413 29110 authenticatee.cpp:299] Authentication success
> I1103 01:40:55.532467 29108 master.cpp:6772] Successfully authenticated
> principal 'test-principal' at slave(150)@172.17.0.2:58302
> I1103 01:40:55.532536 29111 authenticator.cpp:432] Authentication session
> cleanup for crammd5-authenticatee(357)@172.17.0.2:58302
> I1103 01:40:55.532755 29098 slave.cpp:1069] Successfully authenticated
> with master master@172.17.0.2:58302
> I1103 01:40:55.532997 29098 slave.cpp:1483] Will retry registration in
> 12.590371ms if necessary
> I1103 01:40:55.533179 29108 master.cpp:5151] Registering agent at
> slave(150)@172.17.0.2:58302 (maintenance-host-2) with id
> 3167a687-904b-4b57-bc0f-91b67dc7e41d-S1
> I1103 01:40:55.533572 29112 registrar.cpp:461] Applied 1 operations in
> 94467ns; attempting to update the registry
> I1103 01:40:55.546341 29107 slave.cpp:1483] Will retry registration in
> 36.501523ms if necessary
> I1103 01:40:55.546461 29099 master.cpp:5139] Ignoring register agent
> message from slave(150)@172.17.0.2:58302 (maintenance-host-2) as
> admission is already in progress
> I1103 01:40:55.565403 29097 leveldb.cpp:341] Persisting action (16 bytes)
> to leveldb took 48.099208ms
> I1103 01:40:55.565495 29097 replica.cpp:708] Persisted action TRUNCATE at
> position 4
> I1103 01:40:55.566788 29097 replica.cpp:691] Replica received learned
> notice for position 4 from @0.0.0.0:0
> I1103 01:40:55.583937 29101 slave.cpp:1483] Will retry registration in
> 26.127711ms if necessary
> I1103 

Re: Build failed in Jenkins: Mesos » autotools,gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2863

2016-11-03 Thread Benjamin Mahler
The logs have been rotated away, so I can't capture them.

On Wed, Nov 2, 2016 at 12:41 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2863/changes
> >
>
> Changes:
>
> [anand] Added MESOS-6527 to CHANGELOG for 1.1.0.
>
> [anand] Added MESOS-6527 to CHANGELOG for 1.0.2.
>
> [anand] Added MESOS-6527 to CHANGELOG for 0.28.3.
>
> --
> [...truncated 231288 lines...]
> W1102 07:41:10.012501 31617 backend.cpp:76] Failed to create 'aufs'
> backend: AufsBackend requires root privileges, but is running as user mesos
> W1102 07:41:10.012681 31617 backend.cpp:76] Failed to create 'bind'
> backend: BindBackend requires root privileges
> I1102 07:41:10.016486 31645 slave.cpp:208] Mesos agent started on (635)@
> 172.17.0.2:58313
> I1102 07:41:10.016526 31645 slave.cpp:209] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc"
> --authenticate_http_readonly="true" --authenticate_http_readwrite="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos" 
> --container_disk_watch_interval="15secs"
> --containerizers="mesos" --credential="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_1_KcpWU7/credential" --default_role="*"
> --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://
> registry-1.docker.io" --docker_remove_delay="6hrs"
> --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
> --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_
> dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_1_KcpWU7/fetch" --fetcher_cache_size="2GB"
> --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
> --hadoop_home="" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --http_command_executor="false"
> --http_credentials="/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_1_KcpWU7/http_credentials" 
> --image_provisioner_backend="copy"
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem"
> --launcher="posix" --launcher_dir="/mesos/mesos-1.2.0/_build/src"
> --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --qos_correction_interval_min="0ns"
> --quiet="false" --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;
> mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true"
> --runtime_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_1_KcpWU7"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_1_JN40xP"
> I1102 07:41:10.017041 31645 credentials.hpp:86] Loading credential for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_1_KcpWU7/credential'
> I1102 07:41:10.017189 31645 slave.cpp:346] Agent using credential for:
> test-principal
> I1102 07:41:10.017218 31645 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/Endpoint_SlaveEndpointTest_
> AuthorizedRequest_1_KcpWU7/http_credentials'
> I1102 07:41:10.017467 31645 http.cpp:887] Using default 'basic' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1102 07:41:10.017617 31645 http.cpp:887] Using default 'basic' HTTP
> authenticator for realm 'mesos-agent-readwrite'
> I1102 07:41:10.018774 31645 slave.cpp:533] Agent resources: cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I1102 07:41:10.018945 31645 slave.cpp:541] Agent attributes: [  ]
> I1102 07:41:10.018975 31645 slave.cpp:546] Agent hostname: 8bd42677ed21
> I1102 07:41:10.021430 31643 state.cpp:57] Recovering state from
> '/tmp/Endpoint_SlaveEndpointTest_AuthorizedRequest_1_JN40xP/meta'
> I1102 07:41:10.021737 31640 status_update_manager.cpp:203] Recovering
> status update manager
> I1102 07:41:10.022092 31636 containerizer.cpp:557] Recovering containerizer
> I1102 07:41:10.023355 31651 provisioner.cpp:253] Provisioner recovery
> complete
> I1102 07:41:10.023722 31647 slave.cpp:5399] Finished recovery
> I1102 07:41:10.052196 31647 slave.cpp:5573] 

Re: Build failed in Jenkins: Mesos » autotools,gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2452

2016-07-06 Thread Benjamin Mahler
Should be fixed now.

On Tue, Jul 5, 2016 at 11:23 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/BUILDTOOL=autotools,COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2452/changes
> >
>
> Changes:
>
> [anand] Refactored 'Master::Http::getTasks()' into helper function.
>
> [anand] Refactored 'Master::Http::getAgents()' into helper function.
>
> [anand] Refactored 'master::Http::getFrameworks()' to helper function.
>
> [anand] Revised protobuf definition of 'GetState' response.
>
> [anand] Refactored 'Master::Http::getExecutors()' into helper function.
>
> [anand] Implemented 'GetState' call in v1 master API.
>
> [anand] Implemented 'Subscribed' event for v1 master event stream.
>
> --
> [...truncated 10563 lines...]
> [ RUN  ] SystemsTests.Release
> [   OK ] SystemsTests.Release (1 ms)
> [--] 3 tests from SystemsTests (1 ms total)
>
> [--] 6 tests from PathTest
> [ RUN  ] PathTest.Basename
> [   OK ] PathTest.Basename (0 ms)
> [ RUN  ] PathTest.Dirname
> [   OK ] PathTest.Dirname (0 ms)
> [ RUN  ] PathTest.Extension
> [   OK ] PathTest.Extension (0 ms)
> [ RUN  ] PathTest.Join
> [   OK ] PathTest.Join (0 ms)
> [ RUN  ] PathTest.Absolute
> [   OK ] PathTest.Absolute (0 ms)
> [ RUN  ] PathTest.Comparison
> [   OK ] PathTest.Comparison (0 ms)
> [--] 6 tests from PathTest (0 ms total)
>
> [--] 1 test from PathFileTest
> [ RUN  ] PathFileTest.ImplicitConversion
> [   OK ] PathFileTest.ImplicitConversion (0 ms)
> [--] 1 test from PathFileTest (0 ms total)
>
> [--] 10 tests from ProtobufTest
> [ RUN  ] ProtobufTest.JSON
> [   OK ] ProtobufTest.JSON (3 ms)
> [ RUN  ] ProtobufTest.JSONArray
> [   OK ] ProtobufTest.JSONArray (0 ms)
> [ RUN  ] ProtobufTest.JsonLargeIntegers
> [   OK ] ProtobufTest.JsonLargeIntegers (1 ms)
> [ RUN  ] ProtobufTest.SimpleMessageEquals
> [   OK ] ProtobufTest.SimpleMessageEquals (0 ms)
> [ RUN  ] ProtobufTest.ParseJSONArray
> [   OK ] ProtobufTest.ParseJSONArray (0 ms)
> [ RUN  ] ProtobufTest.ParseJSONNull
> [   OK ] ProtobufTest.ParseJSONNull (0 ms)
> [ RUN  ] ProtobufTest.ParseJSONNestedError
> [   OK ] ProtobufTest.ParseJSONNestedError (0 ms)
> [ RUN  ] ProtobufTest.Jsonify
> [   OK ] ProtobufTest.Jsonify (1 ms)
> [ RUN  ] ProtobufTest.JsonifyArray
> [   OK ] ProtobufTest.JsonifyArray (0 ms)
> [ RUN  ] ProtobufTest.JsonifyLargeIntegers
> [   OK ] ProtobufTest.JsonifyLargeIntegers (0 ms)
> [--] 10 tests from ProtobufTest (5 ms total)
>
> [--] 2 tests from RecordIOTest
> [ RUN  ] RecordIOTest.Encoder
> [   OK ] RecordIOTest.Encoder (0 ms)
> [ RUN  ] RecordIOTest.Decoder
> [   OK ] RecordIOTest.Decoder (0 ms)
> [--] 2 tests from RecordIOTest (0 ms total)
>
> [--] 2 tests from ResultTest
> [ RUN  ] ResultTest.TryToResultConversion
> [   OK ] ResultTest.TryToResultConversion (0 ms)
> [ RUN  ] ResultTest.ArrowOperator
> [   OK ] ResultTest.ArrowOperator (0 ms)
> [--] 2 tests from ResultTest (0 ms total)
>
> [--] 1 test from SetTest
> [ RUN  ] SetTest.Set
> [   OK ] SetTest.Set (0 ms)
> [--] 1 test from SetTest (0 ms total)
>
> [--] 1 test from SomeTest
> [ RUN  ] SomeTest.Some
> [   OK ] SomeTest.Some (0 ms)
> [--] 1 test from SomeTest (0 ms total)
>
> [--] 39 tests from StringsTest
> [ RUN  ] StringsTest.Format
> [   OK ] StringsTest.Format (0 ms)
> [ RUN  ] StringsTest.Remove
> [   OK ] StringsTest.Remove (0 ms)
> [ RUN  ] StringsTest.Replace
> [   OK ] StringsTest.Replace (0 ms)
> [ RUN  ] StringsTest.Trim
> [   OK ] StringsTest.Trim (0 ms)
> [ RUN  ] StringsTest.Tokenize
> [   OK ] StringsTest.Tokenize (0 ms)
> [ RUN  ] StringsTest.TokenizeStringWithDelimsAtStart
> [   OK ] StringsTest.TokenizeStringWithDelimsAtStart (0 ms)
> [ RUN  ] StringsTest.TokenizeStringWithDelimsAtEnd
> [   OK ] StringsTest.TokenizeStringWithDelimsAtEnd (0 ms)
> [ RUN  ] StringsTest.TokenizeStringWithDelimsAtStartAndEnd
> [   OK ] StringsTest.TokenizeStringWithDelimsAtStartAndEnd (0 ms)
> [ RUN  ] StringsTest.TokenizeWithMultipleDelims
> [   OK ] StringsTest.TokenizeWithMultipleDelims (0 ms)
> [ RUN  ] StringsTest.TokenizeEmptyString
> [   OK ] StringsTest.TokenizeEmptyString (0 ms)
> [ RUN  ] StringsTest.TokenizeDelimOnlyString
> [   OK ] StringsTest.TokenizeDelimOnlyString (0 ms)
> [ RUN  ] StringsTest.TokenizeNullByteDelim
> [   OK ] StringsTest.TokenizeNullByteDelim (0 ms)
> [ RUN  ] StringsTest.TokenizeNZero
> [   OK ] StringsTest.TokenizeNZero (0 ms)
> [ RUN 

Re: Build failed in Jenkins: Mesos » cmake,clang,--verbose,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2422

2016-07-02 Thread Benjamin Mahler
Should be fixed.

On Sat, Jul 2, 2016 at 5:42 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/BUILDTOOL=cmake,COMPILER=clang,CONFIGURATION=--verbose,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)&&(!ubuntu-6)/2422/changes
> >
>
> Changes:
>
> [bmahler] Added a new 'NvidiaVolume' component.
>
> [bmahler] Integrated the 'NvidiaVolume' component into 'NvidiaComponents'.
>
> [bmahler] Fixed an incorrect check in stout/elf.hpp.
>
> [bmahler] Cleaned up some logic in stout/elf.hpp.
>
> --
> [...truncated 9724 lines...]
> [ 92%] Building CXX object
> src/CMakeFiles/mesos-1.0.0.dir/linux/systemd.cpp.o
> cd /mesos/build/src && /usr/bin/clang++-3.5   -DBUILD_DATE="\"2016-3-3
> 10:20\"" -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\"
> -DBUILD_TIME=\"100\" -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1
> -DLIBDIR=\"/usr/local/libmesos\" -DPICOJSON_USE_INT64
> -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.0.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> -I/mesos/src -I/mesos/3rdparty/stout/include -I/usr/include/apr-1.0
> -I/mesos/build/3rdparty/boost-1.53.0/src/boost-1.53.0
> -I/mesos/build/3rdparty/elfio-3.1/src/elfio-3.1
> -I/mesos/build/3rdparty/glog-0.3.3/src/glog-0.3.3-lib/lib/include
> -I/mesos/build/3rdparty/nvml-352.79/src/nvml-352.79
> -I/mesos/build/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/include
> -I/usr/include/subversion-1 -I/mesos/src/src
> -I/mesos/3rdparty/libprocess/include
> -I/mesos/build/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2
> -I/mesos/build/3rdparty/libev-4.22/src/libev-4.22
> -I/mesos/build/3rdparty/zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/include
> -I/mesos/build/3rdparty/zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/generated
> -I/mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4/include -o
> CMakeFiles/mesos-1.0.0.dir/linux/systemd.cpp.o -c
> /mesos/src/linux/systemd.cpp
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 92%] Building CXX object
> src/CMakeFiles/mesos-1.0.0.dir/slave/containerizer/mesos/linux_launcher.cpp.o
> cd /mesos/build/src && /usr/bin/clang++-3.5   -DBUILD_DATE="\"2016-3-3
> 10:20\"" -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\"
> -DBUILD_TIME=\"100\" -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1
> -DLIBDIR=\"/usr/local/libmesos\" -DPICOJSON_USE_INT64
> -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.0.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> -I/mesos/src -I/mesos/3rdparty/stout/include -I/usr/include/apr-1.0
> -I/mesos/build/3rdparty/boost-1.53.0/src/boost-1.53.0
> -I/mesos/build/3rdparty/elfio-3.1/src/elfio-3.1
> -I/mesos/build/3rdparty/glog-0.3.3/src/glog-0.3.3-lib/lib/include
> -I/mesos/build/3rdparty/nvml-352.79/src/nvml-352.79
> -I/mesos/build/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> -I/mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/include
> -I/usr/include/subversion-1 -I/mesos/src/src
> -I/mesos/3rdparty/libprocess/include
> -I/mesos/build/3rdparty/http_parser-2.6.2/src/http_parser-2.6.2
> -I/mesos/build/3rdparty/libev-4.22/src/libev-4.22
> -I/mesos/build/3rdparty/zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/include
> -I/mesos/build/3rdparty/zookeeper-3.4.8/src/zookeeper-3.4.8/src/c/generated
> -I/mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4/include -o
> CMakeFiles/mesos-1.0.0.dir/slave/containerizer/mesos/linux_launcher.cpp.o
> -c /mesos/src/slave/containerizer/mesos/linux_launcher.cpp
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 64
> [ 93%] Building CXX object
> src/CMakeFiles/mesos-1.0.0.dir/slave/containerizer/mesos/isolators/cgroups/cpushare.cpp.o
> cd /mesos/build/src && /usr/bin/clang++-3.5   -DBUILD_DATE="\"2016-3-3
> 10:20\"" -DBUILD_FLAGS=\"\" -DBUILD_JAVA_JVM_LIBRARY=\"\"
> -DBUILD_TIME=\"100\" -DBUILD_USER=\"frank\" -DHAS_AUTHENTICATION=1
> -DLIBDIR=\"/usr/local/libmesos\" -DPICOJSON_USE_INT64
> -DPKGDATADIR=\"/usr/local/share/mesos\"
> -DPKGLIBEXECDIR=\"/usr/local/libexec/mesos\" -DUSE_STATIC_LIB
> -DVERSION=\"1.0.0\" -D__STDC_FORMAT_MACROS -std=c++11 -g -I/mesos/include
> -I/mesos/build/include -I/mesos/build/include/mesos -I/mesos/build/src
> -I/mesos/src -I/mesos/3rdparty/stout/include -I/usr/include/apr-1.0
> -I/mesos/build/3rdparty/boost-1.53.0/src/boost-1.53.0
> -I/mesos/build/3rdparty/elfio-3.1/src/elfio-3.1
> -I/mesos/build/3rdparty/glog-0.3.3/src/glog-0.3.3-lib/lib/include
> -I/mesos/build/3rdparty/nvml-352.79/src/nvml-352.79
> -I/mesos/build/3rdparty/picojson-1.3.0/src/picojson-1.3.0
> 

Re: Build failed in Jenkins: Mesos » cmake,gcc,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,centos:7,(docker||Hadoop)&&(!ubuntu-us1) #2306

2016-06-17 Thread Benjamin Mahler
+Kevin

On Fri, Jun 17, 2016 at 12:29 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/BUILDTOOL=cmake,COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=centos%3A7,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)/2306/
> >
>
> --
> [...truncated 8441 lines...]
> cd /mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4-build &&
> /usr/bin/cmake -E echo
>
> cd /mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4-build &&
> /usr/bin/cmake -E touch
> /mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4-stamp/leveldb-1.4-install
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 10
> [ 41%] Completed 'leveldb-1.4'
> cd /mesos/build/3rdparty && /usr/bin/cmake -E make_directory
> /mesos/build/3rdparty/CMakeFiles
> cd /mesos/build/3rdparty && /usr/bin/cmake -E touch
> /mesos/build/3rdparty/CMakeFiles/leveldb-1.4-complete
> cd /mesos/build/3rdparty && /usr/bin/cmake -E touch
> /mesos/build/3rdparty/leveldb-1.4/src/leveldb-1.4-stamp/leveldb-1.4-done
> make[2]: Leaving directory `/mesos/build'
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles  10 11
> [ 41%] Built target leveldb-1.4
> make -f src/CMakeFiles/mesos-1.0.0.dir/build.make
> src/CMakeFiles/mesos-1.0.0.dir/depend
> make[2]: Entering directory `/mesos/build'
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 14
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 15
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 41%] [ 41%] [ 42%] [ 42%] [ 42%] [ 42%] [ 43%] Generating
> ../include/mesos/v1/scheduler/scheduler.pb.cc,
> ../include/mesos/v1/scheduler/scheduler.pb.h
> [ 43%]
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/v1/scheduler/scheduler.proto
> Generating ../include/mesos/authentication/authentication.pb.cc,
> ../include/mesos/authentication/authentication.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/authentication/authentication.proto
> Generating ../include/mesos/authorizer/acls.pb.cc,
> ../include/mesos/authorizer/acls.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/authorizer/acls.proto
> Generating ../include/mesos/appc/spec.pb.cc,
> ../include/mesos/appc/spec.pb.h
> Generating ../include/mesos/agent/agent.pb.cc,
> ../include/mesos/agent/agent.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/appc/spec.proto
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/agent/agent.proto
> Generating ../include/mesos/containerizer/containerizer.pb.cc,
> ../include/mesos/containerizer/containerizer.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/containerizer/containerizer.proto
> Generating ../include/mesos/authorizer/authorizer.pb.cc,
> ../include/mesos/authorizer/authorizer.pb.h
> Generating ../include/mesos/master/allocator.pb.cc,
> ../include/mesos/master/allocator.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/authorizer/authorizer.proto
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include -I/mesos/src --cpp_out=/mesos/build/include
> /mesos/include/mesos/master/allocator.proto
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 16
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles
> [ 43%] [ 43%] /usr/bin/cmake -E cmake_progress_report
> /mesos/build/CMakeFiles
> [ 44%] [ 44%] /usr/bin/cmake -E cmake_progress_report
> /mesos/build/CMakeFiles
> /usr/bin/cmake -E cmake_progress_report /mesos/build/CMakeFiles 17
> Generating ../include/mesos/docker/spec.pb.cc,
> ../include/mesos/docker/spec.pb.h
> /mesos/build/3rdparty/protobuf-2.6.1/src/protobuf-2.6.1-lib/lib/bin/protoc
> -I/mesos/include 

Re: Build failed in Jenkins: Mesos » clang,--verbose,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1) #1888

2016-03-29 Thread Benjamin Mahler
Thanks for fixing!

On Tue, Mar 29, 2016 at 11:36 AM, Joseph Wu <jos...@mesosphere.io> wrote:

> Tracked here: https://issues.apache.org/jira/browse/MESOS-4961
>
> On Tue, Mar 29, 2016 at 11:22 AM, Benjamin Mahler <bmah...@apache.org>
> wrote:
>
> > +joseph
> >
> > On Mon, Mar 28, 2016 at 11:32 PM, Apache Jenkins Server <
> > jenk...@builds.apache.org> wrote:
> >
> >> See <
> >>
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)/1888/changes
> >> >
> >>
> >> Changes:
> >>
> >> [yujie.jay] Adapted port_mapping isolator with missing subprocess
> >> parameter.
> >>
> >> [yujie.jay] Fixed typo in subprocess doxygen comments.
> >>
> >> --
> >> [...truncated 178901 lines...]
> >> I0329 06:24:26.085312 32305 master.cpp:378] Flags at startup: --acls=""
> >> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> >> --authenticate="false" --authenticate_http="true"
> >> --authenticate_slaves="true" --authenticators="crammd5"
> >> --authorizers="local" --credentials="/tmp/h70a8A/credentials"
> >> --framework_sorter="drf" --help="false" --hostname_lookup="true"
> >> --http_authenticators="basic" --initialize_driver_logging="true"
> >> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> >> --max_completed_frameworks="50"
> --max_completed_tasks_per_framework="1000"
> >> --max_slave_ping_timeouts="5" --quiet="false"
> >> --recovery_slave_removal_limit="100%" --registry="replicated_log"
> >> --registry_fetch_timeout="1mins" --registry_store_timeout="100secs"
> >> --registry_strict="true" --root_submissions="true"
> >> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> >> --user_sorter="drf" --version="false"
> >> --webui_dir="/mesos/mesos-0.29.0/_inst/share/mesos/webui"
> >> --work_dir="/tmp/h70a8A/master" --zk_session_timeout="10secs"
> >> I0329 06:24:26.085846 32305 master.cpp:429] Master allowing
> >> unauthenticated frameworks to register
> >> I0329 06:24:26.085932 32305 master.cpp:432] Master only allowing
> >> authenticated slaves to register
> >> I0329 06:24:26.086014 32305 credentials.hpp:37] Loading credentials for
> >> authentication from '/tmp/h70a8A/credentials'
> >> I0329 06:24:26.089220 32305 master.cpp:474] Using default 'crammd5'
> >> authenticator
> >> I0329 06:24:26.089391 32305 master.cpp:545] Using default 'basic' HTTP
> >> authenticator
> >> I0329 06:24:26.089530 32305 master.cpp:583] Authorization enabled
> >> I0329 06:24:26.092001 32305 hierarchical.cpp:144] Initialized
> >> hierarchical allocator process
> >> I0329 06:24:26.092093 32305 whitelist_watcher.cpp:77] No whitelist given
> >> I0329 06:24:26.093214 32310 master.cpp:1826] The newly elected leader is
> >> master@172.17.0.2:44577 with id c0df3402-8c18-4794-aea1-d0fc2093455d
> >> I0329 06:24:26.093361 32310 master.cpp:1839] Elected as the leading
> >> master!
> >> I0329 06:24:26.093479 32310 master.cpp:1526] Recovering from registrar
> >> I0329 06:24:26.093793 32302 registrar.cpp:307] Recovering registrar
> >> I0329 06:24:26.097321 32299 leveldb.cpp:304] Persisting metadata (8
> >> bytes) to leveldb took 28.543546ms
> >> I0329 06:24:26.097383 32299 replica.cpp:320] Persisted replica status to
> >> STARTING
> >> I0329 06:24:26.097652 32299 recover.cpp:473] Replica is in STARTING
> status
> >> I0329 06:24:26.099536 32300 replica.cpp:673] Replica in STARTING status
> >> received a broadcasted recover request from (16412)@172.17.0.2:44577
> >> I0329 06:24:26.100143 32300 recover.cpp:193] Received a recover response
> >> from a replica in STARTING status
> >> I0329 06:24:26.100862 32300 recover.cpp:564] Updating replica status to
> >> VOTING
> >> I0329 06:24:26.131098 32300 leveldb.cpp:304] Persisting metadata (8
> >> bytes) to leveldb took 30.047383ms
> >> I0329 06:24:26.131202 32300 replica.cpp:320] Persisted replica status to
> >> VOTING
> 

Re: Build failed in Jenkins: Mesos » clang,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1) #1893

2016-03-29 Thread Benjamin Mahler
Joris, Vinod, has a regression been introduced in the SSL tests?

On Tue, Mar 29, 2016 at 11:21 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)/1893/changes
> >
>
> Changes:
>
> [vinodkone] Changed name of http-parser enum to 'flags_enum'.
>
> [yujie.jay] Implemented isolate() method of "network/cni" isolator.
>
> --
> [...truncated 19136 lines...]
> [   OK ] ProcessTest.FirewallUninstall (10 ms)
> [--] 30 tests from ProcessTest (91 ms total)
>
> [--] 3 tests from QueueTest
> [ RUN  ] QueueTest.Block
> [   OK ] QueueTest.Block (0 ms)
> [ RUN  ] QueueTest.Noblock
> [   OK ] QueueTest.Noblock (0 ms)
> [ RUN  ] QueueTest.Queue
> [   OK ] QueueTest.Queue (0 ms)
> [--] 3 tests from QueueTest (0 ms total)
>
> [--] 3 tests from ReapTest
> [ RUN  ] ReapTest.NonChildProcess
> [   OK ] ReapTest.NonChildProcess (43 ms)
> [ RUN  ] ReapTest.ChildProcess
> [   OK ] ReapTest.ChildProcess (31 ms)
> [ RUN  ] ReapTest.TerminatedChildProcess
> [   OK ] ReapTest.TerminatedChildProcess (24 ms)
> [--] 3 tests from ReapTest (99 ms total)
>
> [--] 4 tests from SequenceTest
> [ RUN  ] SequenceTest.Serialize
> [   OK ] SequenceTest.Serialize (11 ms)
> [ RUN  ] SequenceTest.DiscardOne
> [   OK ] SequenceTest.DiscardOne (11 ms)
> [ RUN  ] SequenceTest.DiscardAll
> [   OK ] SequenceTest.DiscardAll (12 ms)
> [ RUN  ] SequenceTest.Random
> [   OK ] SequenceTest.Random (11 ms)
> [--] 4 tests from SequenceTest (46 ms total)
>
> [--] 4 tests from SharedTest
> [ RUN  ] SharedTest.ConstAccess
> [   OK ] SharedTest.ConstAccess (0 ms)
> [ RUN  ] SharedTest.Null
> [   OK ] SharedTest.Null (0 ms)
> [ RUN  ] SharedTest.Reset
> [   OK ] SharedTest.Reset (0 ms)
> [ RUN  ] SharedTest.Own
> [   OK ] SharedTest.Own (0 ms)
> [--] 4 tests from SharedTest (0 ms total)
>
> [--] 3 tests from StatisticsTest
> [ RUN  ] StatisticsTest.Empty
> [   OK ] StatisticsTest.Empty (0 ms)
> [ RUN  ] StatisticsTest.Single
> [   OK ] StatisticsTest.Single (0 ms)
> [ RUN  ] StatisticsTest.Statistics
> [   OK ] StatisticsTest.Statistics (0 ms)
> [--] 3 tests from StatisticsTest (0 ms total)
>
> [--] 14 tests from SubprocessTest
> [ RUN  ] SubprocessTest.Status
> [   OK ] SubprocessTest.Status (86 ms)
> [ RUN  ] SubprocessTest.PipeOutput
> [   OK ] SubprocessTest.PipeOutput (28 ms)
> [ RUN  ] SubprocessTest.PipeInput
> [   OK ] SubprocessTest.PipeInput (14 ms)
> [ RUN  ] SubprocessTest.PipeRedirect
> [   OK ] SubprocessTest.PipeRedirect (14 ms)
> [ RUN  ] SubprocessTest.PathOutput
> [   OK ] SubprocessTest.PathOutput (45 ms)
> [ RUN  ] SubprocessTest.PathInput
> [   OK ] SubprocessTest.PathInput (17 ms)
> [ RUN  ] SubprocessTest.FdOutput
> [   OK ] SubprocessTest.FdOutput (45 ms)
> [ RUN  ] SubprocessTest.FdInput
> [   OK ] SubprocessTest.FdInput (15 ms)
> [ RUN  ] SubprocessTest.Default
> hello world
> [   OK ] SubprocessTest.Default (24 ms)
> [ RUN  ] SubprocessTest.Flags
> [   OK ] SubprocessTest.Flags (23 ms)
> [ RUN  ] SubprocessTest.Environment
> [   OK ] SubprocessTest.Environment (31 ms)
> [ RUN  ] SubprocessTest.EnvironmentWithSpaces
> [   OK ] SubprocessTest.EnvironmentWithSpaces (14 ms)
> [ RUN  ] SubprocessTest.EnvironmentWithSpacesAndQuotes
> [   OK ] SubprocessTest.EnvironmentWithSpacesAndQuotes (15 ms)
> [ RUN  ] SubprocessTest.EnvironmentOverride
> [   OK ] SubprocessTest.EnvironmentOverride (17 ms)
> [--] 14 tests from SubprocessTest (389 ms total)
>
> [--] 3 tests from TimeSeriesTest
> [ RUN  ] TimeSeriesTest.Set
> [   OK ] TimeSeriesTest.Set (0 ms)
> [ RUN  ] TimeSeriesTest.Sparsify
> [   OK ] TimeSeriesTest.Sparsify (0 ms)
> [ RUN  ] TimeSeriesTest.Truncate
> [   OK ] TimeSeriesTest.Truncate (1 ms)
> [--] 3 tests from TimeSeriesTest (1 ms total)
>
> [--] 5 tests from TimeTest
> [ RUN  ] TimeTest.Arithmetic
> [   OK ] TimeTest.Arithmetic (0 ms)
> [ RUN  ] TimeTest.Now
> [   OK ] TimeTest.Now (0 ms)
> [ RUN  ] TimeTest.RFC1123Output
> [   OK ] TimeTest.RFC1123Output (0 ms)
> [ RUN  ] TimeTest.RFC3339Output
> [   OK ] TimeTest.RFC3339Output (0 ms)
> [ RUN  ] TimeTest.Output
> [   OK ] TimeTest.Output (0 ms)
> [--] 5 tests from TimeTest (0 ms total)
>
> [--] 1 test from SSL
> [ RUN  ] SSL.Disabled
> [   OK ] SSL.Disabled (1 ms)
> [--] 1 test from SSL (1 ms total)
>
> [--] 16 tests from SSLTest
> [ RUN  ] 

Re: Build failed in Jenkins: Mesos » clang,--verbose,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1) #1888

2016-03-29 Thread Benjamin Mahler
+joseph

On Mon, Mar 28, 2016 at 11:32 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,ENVIRONMENT=GLOG_v=1%20MESOS_VERBOSE=1,OS=ubuntu%3A14.04,label_exp=(docker%7C%7CHadoop)&&(!ubuntu-us1)/1888/changes
> >
>
> Changes:
>
> [yujie.jay] Adapted port_mapping isolator with missing subprocess
> parameter.
>
> [yujie.jay] Fixed typo in subprocess doxygen comments.
>
> --
> [...truncated 178901 lines...]
> I0329 06:24:26.085312 32305 master.cpp:378] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_http="true"
> --authenticate_slaves="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/h70a8A/credentials"
> --framework_sorter="drf" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --max_slave_ping_timeouts="5" --quiet="false"
> --recovery_slave_removal_limit="100%" --registry="replicated_log"
> --registry_fetch_timeout="1mins" --registry_store_timeout="100secs"
> --registry_strict="true" --root_submissions="true"
> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> --user_sorter="drf" --version="false"
> --webui_dir="/mesos/mesos-0.29.0/_inst/share/mesos/webui"
> --work_dir="/tmp/h70a8A/master" --zk_session_timeout="10secs"
> I0329 06:24:26.085846 32305 master.cpp:429] Master allowing
> unauthenticated frameworks to register
> I0329 06:24:26.085932 32305 master.cpp:432] Master only allowing
> authenticated slaves to register
> I0329 06:24:26.086014 32305 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/h70a8A/credentials'
> I0329 06:24:26.089220 32305 master.cpp:474] Using default 'crammd5'
> authenticator
> I0329 06:24:26.089391 32305 master.cpp:545] Using default 'basic' HTTP
> authenticator
> I0329 06:24:26.089530 32305 master.cpp:583] Authorization enabled
> I0329 06:24:26.092001 32305 hierarchical.cpp:144] Initialized hierarchical
> allocator process
> I0329 06:24:26.092093 32305 whitelist_watcher.cpp:77] No whitelist given
> I0329 06:24:26.093214 32310 master.cpp:1826] The newly elected leader is
> master@172.17.0.2:44577 with id c0df3402-8c18-4794-aea1-d0fc2093455d
> I0329 06:24:26.093361 32310 master.cpp:1839] Elected as the leading master!
> I0329 06:24:26.093479 32310 master.cpp:1526] Recovering from registrar
> I0329 06:24:26.093793 32302 registrar.cpp:307] Recovering registrar
> I0329 06:24:26.097321 32299 leveldb.cpp:304] Persisting metadata (8 bytes)
> to leveldb took 28.543546ms
> I0329 06:24:26.097383 32299 replica.cpp:320] Persisted replica status to
> STARTING
> I0329 06:24:26.097652 32299 recover.cpp:473] Replica is in STARTING status
> I0329 06:24:26.099536 32300 replica.cpp:673] Replica in STARTING status
> received a broadcasted recover request from (16412)@172.17.0.2:44577
> I0329 06:24:26.100143 32300 recover.cpp:193] Received a recover response
> from a replica in STARTING status
> I0329 06:24:26.100862 32300 recover.cpp:564] Updating replica status to
> VOTING
> I0329 06:24:26.131098 32300 leveldb.cpp:304] Persisting metadata (8 bytes)
> to leveldb took 30.047383ms
> I0329 06:24:26.131202 32300 replica.cpp:320] Persisted replica status to
> VOTING
> I0329 06:24:26.131471 32300 recover.cpp:578] Successfully joined the Paxos
> group
> I0329 06:24:26.131674 32300 recover.cpp:462] Recover process terminated
> I0329 06:24:26.132522 32300 log.cpp:659] Attempting to start the writer
> I0329 06:24:26.134305 32300 replica.cpp:493] Replica received implicit
> promise request from (16413)@172.17.0.2:44577 with proposal 1
> I0329 06:24:26.164098 32300 leveldb.cpp:304] Persisting metadata (8 bytes)
> to leveldb took 29.777838ms
> I0329 06:24:26.164202 32300 replica.cpp:342] Persisted promised to 1
> I0329 06:24:26.174093 32307 coordinator.cpp:238] Coordinator attempting to
> fill missing positions
> I0329 06:24:26.177204 32307 replica.cpp:388] Replica received explicit
> promise request from (16414)@172.17.0.2:44577 for position 0 with
> proposal 2
> I0329 06:24:26.202256 32307 leveldb.cpp:341] Persisting action (8 bytes)
> to leveldb took 24.828611ms
> I0329 06:24:26.202365 32307 replica.cpp:712] Persisted action at 0
> I0329 06:24:26.204294 32307 replica.cpp:537] Replica received write
> request for position 0 from (16415)@172.17.0.2:44577
> I0329 06:24:26.204396 32307 leveldb.cpp:436] Reading position from leveldb
> took 63731ns
> I0329 06:24:26.227409 32307 leveldb.cpp:341] Persisting action (14 bytes)
> to leveldb took 22.997586ms
> I0329 06:24:26.227517 32307 replica.cpp:712] Persisted action at 0
> I0329 06:24:26.233665 32299 replica.cpp:691] Replica received learned
> notice for position 0 from @0.0.0.0:0
> 
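The recovery sequence visible in the log above — replica status persisted to STARTING, then VOTING, then "Successfully joined the Paxos group" — can be sketched as a minimal state machine. This is an illustrative Python sketch of those transitions, not Mesos source code:

```python
# Illustrative sketch (not Mesos source): the replica status
# transitions that appear in the recovery log above. A recovering
# replica moves EMPTY -> STARTING -> VOTING; only a VOTING replica
# participates in the Paxos group.
from enum import Enum


class Status(Enum):
    EMPTY = "EMPTY"        # no persisted state yet
    STARTING = "STARTING"  # recovery underway, status persisted
    VOTING = "VOTING"      # full member of the Paxos group


class Replica:
    def __init__(self):
        self.status = Status.EMPTY

    def start_recovery(self):
        # Corresponds to "Persisted replica status to STARTING".
        self.status = Status.STARTING

    def finish_recovery(self):
        # Corresponds to "Updating replica status to VOTING" /
        # "Successfully joined the Paxos group".
        self.status = Status.VOTING

    def can_vote(self):
        return self.status is Status.VOTING
```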

Re: Build failed in Jenkins: Mesos » gcc,--verbose,ubuntu:14.04,docker||Hadoop #1544

2016-01-25 Thread Benjamin Mahler
Hey Tim, are you planning to work on this? Or should we re-assign it?

On Mon, Jan 25, 2016 at 8:54 AM, Greg Mann  wrote:

> We have a ticket tracking this failure here:
> https://issues.apache.org/jira/browse/MESOS-1802
>
> On Mon, Jan 25, 2016 at 7:32 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
> > See <
> >
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1544/changes
> > >
> >
> > Changes:
> >
> > [toenshoff] Improved rakefile to allow for external .md file links.
> >
> > [toenshoff] Made links to .md files consistent across documentation.
> >
> > --
> > [...truncated 164628 lines...]
> > I0125 15:31:53.907892 31995 gc.cpp:54] Scheduling
> >
> '/tmp/ContentType_SchedulerTest_Message_1_FH51HS/slaves/6e16d26e-80dd-4b46-b8b5-1f7aa9133d80-S0/frameworks/6e16d26e-80dd-4b46-b8b5-1f7aa9133d80-/executors/default'
> > for gc 6.895029037days in the future
> > I0125 15:31:53.908051 31995 gc.cpp:54] Scheduling
> >
> '/tmp/ContentType_SchedulerTest_Message_1_FH51HS/slaves/6e16d26e-80dd-4b46-b8b5-1f7aa9133d80-S0/frameworks/6e16d26e-80dd-4b46-b8b5-1f7aa9133d80-'
> > for gc 6.8950096296days in the future
> > [   OK ] ContentType/SchedulerTest.Message/1 (795 ms)
> > [ RUN  ] ContentType/SchedulerTest.Request/0
> > I0125 15:31:54.011689 31962 leveldb.cpp:174] Opened db in 95.027917ms
> > I0125 15:31:54.048696 31962 leveldb.cpp:181] Compacted db in 36.838022ms
> > I0125 15:31:54.049106 31962 leveldb.cpp:196] Created db iterator in
> 30240ns
> > I0125 15:31:54.049321 31962 leveldb.cpp:202] Seeked to beginning of db in
> > 8041ns
> > I0125 15:31:54.049427 31962 leveldb.cpp:271] Iterated through 0 keys in
> > the db in 480ns
> > I0125 15:31:54.049568 31962 replica.cpp:779] Replica recovered with log
> > positions 0 -> 0 with 1 holes and 0 unlearned
> > I0125 15:31:54.050299 31993 recover.cpp:447] Starting replica recovery
> > I0125 15:31:54.050555 31993 recover.cpp:473] Replica is in EMPTY status
> > I0125 15:31:54.052047 31983 replica.cpp:673] Replica in EMPTY status
> > received a broadcasted recover request from (14154)@172.17.0.2:52355
> > I0125 15:31:54.052635 31982 recover.cpp:193] Received a recover response
> > from a replica in EMPTY status
> > I0125 15:31:54.053282 31985 master.cpp:374] Master
> > a41bf879-36b6-4769-82c6-4220f6ecc603 (6a3fa6c4588a) started on
> > 172.17.0.2:52355
> > I0125 15:31:54.053308 31985 master.cpp:376] Flags at startup: --acls=""
> > --allocation_interval="1secs" --allocator="HierarchicalDRF"
> > --authenticate="false" --authenticate_http="true"
> > --authenticate_slaves="true" --authenticators="crammd5"
> > --authorizers="local" --credentials="/tmp/60pgVG/credentials"
> > --framework_sorter="drf" --help="false" --hostname_lookup="true"
> > --http_authenticators="basic" --initialize_driver_logging="true"
> > --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> > --max_completed_frameworks="50"
> --max_completed_tasks_per_framework="1000"
> > --max_slave_ping_timeouts="5" --quiet="false"
> > --recovery_slave_removal_limit="100%" --registry="replicated_log"
> > --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> > --registry_strict="true" --root_submissions="true"
> > --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> > --user_sorter="drf" --version="false"
> > --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> > --work_dir="/tmp/60pgVG/master" --zk_session_timeout="10secs"
> > I0125 15:31:54.053663 31985 master.cpp:423] Master allowing
> > unauthenticated frameworks to register
> > I0125 15:31:54.053680 31985 master.cpp:426] Master only allowing
> > authenticated slaves to register
> > I0125 15:31:54.053691 31985 credentials.hpp:35] Loading credentials for
> > authentication from '/tmp/60pgVG/credentials'
> > I0125 15:31:54.054059 31985 master.cpp:466] Using default 'crammd5'
> > authenticator
> > I0125 15:31:54.054276 31985 master.cpp:535] Using default 'basic' HTTP
> > authenticator
> > I0125 15:31:54.054428 31985 master.cpp:569] Authorization enabled
> > I0125 15:31:54.055181 31990 whitelist_watcher.cpp:77] No whitelist given
> > I0125 15:31:54.056215 31983 recover.cpp:564] Updating replica status to
> > STARTING
> > I0125 15:31:54.056524 31984 hierarchical.cpp:144] Initialized
> hierarchical
> > allocator process
> > I0125 15:31:54.059185 31983 master.cpp:1710] The newly elected leader is
> > master@172.17.0.2:52355 with id a41bf879-36b6-4769-82c6-4220f6ecc603
> > I0125 15:31:54.059321 31983 master.cpp:1723] Elected as the leading
> master!
> > I0125 15:31:54.059443 31983 master.cpp:1468] Recovering from registrar
> > I0125 15:31:54.059737 31981 registrar.cpp:307] Recovering registrar
> > I0125 15:31:54.087240 31987 leveldb.cpp:304] Persisting metadata (8
> bytes)
> > to leveldb took 30.864305ms
> > I0125 15:31:54.087321 31987 replica.cpp:320] 

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1506

2016-01-22 Thread Benjamin Mahler
It will be fixed soon.

On Tue, Jan 19, 2016 at 4:34 PM, Greg Mann  wrote:

> Looks like it could be caused by the same issue as
> https://issues.apache.org/jira/browse/MESOS-4409
>
> On Tue, Jan 19, 2016 at 4:07 PM, Vinod Kone  wrote:
>
> > Is this being looked at?
> >
> > On Mon, Jan 18, 2016 at 3:03 PM, Apache Jenkins Server <
> > jenk...@builds.apache.org> wrote:
> >
> > > See <
> > >
> >
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1506/
> > > >
> > >
> > > --
> > > [...truncated 155770 lines...]
> > > I0118 23:03:19.844106 29929 gc.cpp:54] Scheduling
> > >
> >
> '/tmp/ContentType_SchedulerTest_Message_1_7OegKY/slaves/05552c6c-3b19-4d76-9402-ef9299661a04-S0/frameworks/05552c6c-3b19-4d76-9402-ef9299661a04-'
> > > for gc 6.9023125037days in the future
> > > I0118 23:03:19.844135 29928 status_update_manager.cpp:528] Cleaning up
> > > status update stream for task a7ef87eb-1120-4799-98e0-6c12cb4fae8a of
> > > framework 05552c6c-3b19-4d76-9402-ef9299661a04-
> > > [   OK ] ContentType/SchedulerTest.Message/1 (133 ms)
> > > [ RUN  ] ContentType/SchedulerTest.Request/0
> > > I0118 23:03:19.851078 29898 leveldb.cpp:174] Opened db in 2.742478ms
> > > I0118 23:03:19.852309 29898 leveldb.cpp:181] Compacted db in 1.218396ms
> > > I0118 23:03:19.852401 29898 leveldb.cpp:196] Created db iterator in
> > 33639ns
> > > I0118 23:03:19.852421 29898 leveldb.cpp:202] Seeked to beginning of db
> in
> > > 7363ns
> > > I0118 23:03:19.852429 29898 leveldb.cpp:271] Iterated through 0 keys in
> > > the db in 5034ns
> > > I0118 23:03:19.852474 29898 replica.cpp:779] Replica recovered with log
> > > positions 0 -> 0 with 1 holes and 0 unlearned
> > > I0118 23:03:19.853023 29930 recover.cpp:447] Starting replica recovery
> > > I0118 23:03:19.854782 29930 recover.cpp:473] Replica is in EMPTY status
> > > I0118 23:03:19.855705 29925 master.cpp:374] Master
> > > 624efc91-f4ce-4a4a-a449-00ce1f03b7b9 (3ade37c447fa) started on
> > > 172.17.0.4:45683
> > > I0118 23:03:19.855744 29925 master.cpp:376] Flags at startup: --acls=""
> > > --allocation_interval="1secs" --allocator="HierarchicalDRF"
> > > --authenticate="false" --authenticate_http="true"
> > > --authenticate_slaves="true" --authenticators="crammd5"
> > > --authorizers="local" --credentials="/tmp/jzBYpt/credentials"
> > > --framework_sorter="drf" --help="false" --hostname_lookup="true"
> > > --http_authenticators="basic" --initialize_driver_logging="true"
> > > --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> > > --max_completed_frameworks="50"
> > --max_completed_tasks_per_framework="1000"
> > > --max_slave_ping_timeouts="5" --quiet="false"
> > > --recovery_slave_removal_limit="100%" --registry="replicated_log"
> > > --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> > > --registry_strict="true" --root_submissions="true"
> > > --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> > > --user_sorter="drf" --version="false"
> > > --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> > > --work_dir="/tmp/jzBYpt/master" --zk_session_timeout="10secs"
> > > I0118 23:03:19.856138 29925 master.cpp:423] Master allowing
> > > unauthenticated frameworks to register
> > > I0118 23:03:19.856154 29925 master.cpp:426] Master only allowing
> > > authenticated slaves to register
> > > I0118 23:03:19.856164 29925 credentials.hpp:35] Loading credentials for
> > > authentication from '/tmp/jzBYpt/credentials'
> > > I0118 23:03:19.856315 29916 replica.cpp:673] Replica in EMPTY status
> > > received a broadcasted recover request from (13683)@172.17.0.4:45683
> > > I0118 23:03:19.856478 29925 master.cpp:466] Using default 'crammd5'
> > > authenticator
> > > I0118 23:03:19.856631 29925 master.cpp:535] Using default 'basic' HTTP
> > > authenticator
> > > I0118 23:03:19.856799 29925 master.cpp:569] Authorization enabled
> > > I0118 23:03:19.857043 29931 hierarchical.cpp:145] Initialized
> > hierarchical
> > > allocator process
> > > I0118 23:03:19.857069 29918 whitelist_watcher.cpp:77] No whitelist
> given
> > > I0118 23:03:19.857074 29917 recover.cpp:193] Received a recover
> response
> > > from a replica in EMPTY status
> > > I0118 23:03:19.857702 29924 recover.cpp:564] Updating replica status to
> > > STARTING
> > > I0118 23:03:19.858502 29917 leveldb.cpp:304] Persisting metadata (8
> > bytes)
> > > to leveldb took 678031ns
> > > I0118 23:03:19.858530 29917 replica.cpp:320] Persisted replica status
> to
> > > STARTING
> > > I0118 23:03:19.858784 29924 recover.cpp:473] Replica is in STARTING
> > status
> > > I0118 23:03:19.859259 29921 master.cpp:1710] The newly elected leader
> is
> > > master@172.17.0.4:45683 with id 624efc91-f4ce-4a4a-a449-00ce1f03b7b9
> > > I0118 23:03:19.859293 29921 master.cpp:1723] Elected as the leading
> > master!
> > > I0118 23:03:19.859303 

Re: Build failed in Jenkins: mesos-reviewbot #10623

2016-01-08 Thread Benjamin Mahler
Logging the body of the 500 (or other errors) might shed some light,
instead of just the status:
https://github.com/apache/mesos/commit/3b4d0d1eaf2f0277b45a182c33a71c321db2cf8f
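The suggestion above — log the error body, not just the status code — can be illustrated with a small hedged sketch. The function name and signature here are hypothetical, not the reviewbot code:

```python
# Hypothetical sketch of the suggestion: on a non-2xx response,
# log the response body alongside the status so a bare
# "500 Internal Server Error" carries its explanation.
import logging


def check_response(status, reason, body):
    """Return on 2xx; otherwise log the body and raise."""
    if 200 <= status < 300:
        return
    logging.error("Request failed: %d %s\nBody: %s", status, reason, body)
    raise RuntimeError(f"{status} {reason}: {body}")
```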

On Wed, Jan 6, 2016 at 4:38 PM, Vinod Kone  wrote:

> aha. i missed the "internal server error" part. this is the 2nd review
> where i have seen the internal server error. i wonder if it's due to the
> size of the message being posted to RB.
>
> On Wed, Jan 6, 2016 at 3:35 PM, Joris Van Remoortere 
> wrote:
>
> > @Vinod
> > This is a review, not a commit chain build.
> >
> > —
> > *Joris Van Remoortere*
> > Mesosphere
> >
> > On Wed, Jan 6, 2016 at 3:06 PM, Vinod Kone  wrote:
> >
> > > has anyone looked into this?
> > >
> > > On Wed, Jan 6, 2016 at 1:43 PM, Apache Jenkins Server <
> > > jenk...@builds.apache.org> wrote:
> > >
> > > > See 
> > > >
> > > > --
> > > > [...truncated 166770 lines...]
> > > > I0106 21:43:45.977841 31114 gc.cpp:54] Scheduling
> > > >
> > >
> >
> '/tmp/ContentType_SchedulerTest_Message_1_JMRPFg/slaves/57411c29-038d-4bfe-83a8-06d0b9446748-S0/frameworks/57411c29-038d-4bfe-83a8-06d0b9446748-'
> > > > for gc 6.8868343704days in the future
> > > > [   OK ] ContentType/SchedulerTest.Message/1 (789 ms)
> > > > [ RUN  ] ContentType/SchedulerTest.Request/0
> > > > I0106 21:43:46.117120 31091 leveldb.cpp:174] Opened db in
> 133.659852ms
> > > > I0106 21:43:46.158972 31091 leveldb.cpp:181] Compacted db in
> > 41.789647ms
> > > > I0106 21:43:46.159075 31091 leveldb.cpp:196] Created db iterator in
> > > 30181ns
> > > > I0106 21:43:46.159097 31091 leveldb.cpp:202] Seeked to beginning of
> db
> > in
> > > > 4088ns
> > > > I0106 21:43:46.159111 31091 leveldb.cpp:271] Iterated through 0 keys
> in
> > > > the db in 578ns
> > > > I0106 21:43:46.159198 31091 replica.cpp:779] Replica recovered with
> log
> > > > positions 0 -> 0 with 1 holes and 0 unlearned
> > > > I0106 21:43:46.159945 31120 recover.cpp:447] Starting replica
> recovery
> > > > I0106 21:43:46.160256 31120 recover.cpp:473] Replica is in EMPTY
> status
> > > > I0106 21:43:46.161675 31115 replica.cpp:673] Replica in EMPTY status
> > > > received a broadcasted recover request from (12975)@
> 172.17.9.187:45402
> > > > I0106 21:43:46.162204 31120 recover.cpp:193] Received a recover
> > response
> > > > from a replica in EMPTY status
> > > > I0106 21:43:46.162719 31122 recover.cpp:564] Updating replica status
> to
> > > > STARTING
> > > > I0106 21:43:46.163354 31125 master.cpp:365] Master
> > > > 8608ce8a-7930-4305-ad2d-c89bdb59ec96 (196efe71b632) started on
> > > > 172.17.9.187:45402
> > > > I0106 21:43:46.163534 31125 master.cpp:367] Flags at startup:
> --acls=""
> > > > --allocation_interval="1secs" --allocator="HierarchicalDRF"
> > > > --authenticate="false" --authenticate_slaves="true"
> > > > --authenticators="crammd5" --authorizers="local"
> > > > --credentials="/tmp/SNb1Y0/credentials" --framework_sorter="drf"
> > > > --help="false" --hostname_lookup="true"
> > > --initialize_driver_logging="true"
> > > > --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> > > > --max_slave_ping_timeouts="5" --quiet="false"
> > > > --recovery_slave_removal_limit="100%" --registry="replicated_log"
> > > > --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> > > > --registry_strict="true" --root_submissions="true"
> > > > --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> > > > --user_sorter="drf" --version="false"
> > > > --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> > > > --work_dir="/tmp/SNb1Y0/master" --zk_session_timeout="10secs"
> > > > I0106 21:43:46.163859 31125 master.cpp:414] Master allowing
> > > > unauthenticated frameworks to register
> > > > I0106 21:43:46.163873 31125 master.cpp:417] Master only allowing
> > > > authenticated slaves to register
> > > > I0106 21:43:46.163882 31125 credentials.hpp:35] Loading credentials
> for
> > > > authentication from '/tmp/SNb1Y0/credentials'
> > > > I0106 21:43:46.164273 31125 master.cpp:456] Using default 'crammd5'
> > > > authenticator
> > > > I0106 21:43:46.164417 31125 master.cpp:493] Authorization enabled
> > > > I0106 21:43:46.164578 31117 whitelist_watcher.cpp:77] No whitelist
> > given
> > > > I0106 21:43:46.164615 3 hierarchical.cpp:147] Initialized
> > > hierarchical
> > > > allocator process
> > > > I0106 21:43:46.166348 31122 master.cpp:1629] The newly elected leader
> > is
> > > > master@172.17.9.187:45402 with id
> 8608ce8a-7930-4305-ad2d-c89bdb59ec96
> > > > I0106 21:43:46.166390 31122 master.cpp:1642] Elected as the leading
> > > master!
> > > > I0106 21:43:46.166419 31122 master.cpp:1387] Recovering from
> registrar
> > > > I0106 21:43:46.166589 31113 registrar.cpp:307] Recovering registrar
> > > > I0106 21:43:46.201516 31115 leveldb.cpp:304] Persisting metadata (8
> 

Re: Build failed in Jenkins: Mesos » clang,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #1438

2016-01-05 Thread Benjamin Mahler
+vinod

Hm.. we use the '--rm' flag when invoking 'docker run' so I'm not sure why
the container isn't removed by the time we invoke 'docker rmi' upon exiting
our script.. any ideas?
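One way around such a race — since `docker run --rm` removes the container asynchronously, a `docker rmi` issued immediately on script exit can find it still present — is to retry the cleanup a few times. A minimal sketch (generic retry helper; the image tag in the comment is hypothetical):

```python
# Hedged sketch: retry an idempotent cleanup command, useful when
# 'docker rmi' races with the asynchronous container removal that
# 'docker run --rm' performs on exit.
import subprocess
import time


def retry(args, attempts=5, delay=1.0):
    """Run a command until it exits 0, or give up after `attempts` tries."""
    for i in range(attempts):
        if subprocess.call(args) == 0:
            return True
        if i < attempts - 1:
            time.sleep(delay)
    return False

# Example usage (hypothetical image tag):
# retry(["docker", "rmi", "mesos-build:latest"])
```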

On Tue, Jan 5, 2016 at 6:10 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1438/changes
> >
>
> Changes:
>
> [toenshoff] Removed redundant constructor.
>
> --
> [...truncated 156035 lines...]
> rm -f linux/routing/filter/.dirstamp
> rm -f linux/routing/*.lo
> rm -f linux/routing/link/.deps/.dirstamp
> rm -f linux/routing/diagnosis/*.o
> rm -f linux/routing/link/.dirstamp
> rm -f linux/routing/diagnosis/*.lo
> rm -f linux/routing/queueing/.deps/.dirstamp
> rm -f linux/routing/filter/*.o
> rm -f linux/routing/queueing/.dirstamp
> rm -f linux/routing/filter/*.lo
> rm -f local/.deps/.dirstamp
> rm -f linux/routing/link/*.o
> rm -f local/.dirstamp
> rm -f linux/routing/link/*.lo
> rm -f log/.deps/.dirstamp
> rm -f linux/routing/queueing/*.o
> rm -f log/.dirstamp
> rm -f linux/routing/queueing/*.lo
> rm -f local/*.o
> rm -f local/*.lo
> rm -f log/*.o
> rm -f log/tool/.deps/.dirstamp
> rm -f log/tool/.dirstamp
> rm -f log/*.lo
> rm -f logging/.deps/.dirstamp
> rm -f log/tool/*.o
> rm -f logging/.dirstamp
> rm -f log/tool/*.lo
> rm -f master/.deps/.dirstamp
> rm -f logging/*.o
> rm -f master/.dirstamp
> rm -f logging/*.lo
> rm -f master/allocator/.deps/.dirstamp
> rm -f master/*.o
> rm -f master/allocator/.dirstamp
> rm -f master/allocator/mesos/.deps/.dirstamp
> rm -f master/*.lo
> rm -f master/allocator/mesos/.dirstamp
> rm -f master/allocator/*.o
> rm -f master/allocator/sorter/drf/.deps/.dirstamp
> rm -f master/allocator/*.lo
> rm -f master/allocator/sorter/drf/.dirstamp
> rm -f master/allocator/mesos/*.o
> rm -f messages/.deps/.dirstamp
> rm -f master/allocator/mesos/*.lo
> rm -f messages/.dirstamp
> rm -f master/allocator/sorter/drf/*.o
> rm -f module/.deps/.dirstamp
> rm -f master/allocator/sorter/drf/*.lo
> rm -f module/.dirstamp
> rm -f messages/*.o
> rm -f sched/.deps/.dirstamp
> rm -f messages/*.lo
> rm -f sched/.dirstamp
> rm -f module/*.o
> rm -f scheduler/.deps/.dirstamp
> rm -f module/*.lo
> rm -f scheduler/.dirstamp
> rm -f sched/*.o
> rm -f slave/.deps/.dirstamp
> rm -f sched/*.lo
> rm -f slave/.dirstamp
> rm -f scheduler/*.o
> rm -f slave/container_loggers/.deps/.dirstamp
> rm -f scheduler/*.lo
> rm -f slave/container_loggers/.dirstamp
> rm -f slave/*.o
> rm -f slave/*.lo
> rm -f slave/containerizer/.deps/.dirstamp
> rm -f slave/container_loggers/*.o
> rm -f slave/containerizer/.dirstamp
> rm -f slave/container_loggers/*.lo
> rm -f slave/containerizer/mesos/.deps/.dirstamp
> rm -f slave/containerizer/*.o
> rm -f slave/containerizer/mesos/.dirstamp
> rm -f slave/containerizer/*.lo
> rm -f slave/containerizer/mesos/isolators/cgroups/.deps/.dirstamp
> rm -f slave/containerizer/mesos/*.o
> rm -f slave/containerizer/mesos/isolators/cgroups/.dirstamp
> rm -f slave/containerizer/mesos/*.lo
> rm -f slave/containerizer/mesos/isolators/filesystem/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/cgroups/*.o
> rm -f slave/containerizer/mesos/isolators/filesystem/.dirstamp
> rm -f slave/containerizer/mesos/isolators/cgroups/*.lo
> rm -f slave/containerizer/mesos/isolators/namespaces/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/filesystem/*.o
> rm -f slave/containerizer/mesos/isolators/namespaces/.dirstamp
> rm -f slave/containerizer/mesos/isolators/filesystem/*.lo
> rm -f slave/containerizer/mesos/isolators/network/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/namespaces/*.o
> rm -f slave/containerizer/mesos/isolators/network/.dirstamp
> rm -f slave/containerizer/mesos/isolators/namespaces/*.lo
> rm -f slave/containerizer/mesos/isolators/posix/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/network/*.o
> rm -f slave/containerizer/mesos/isolators/posix/.dirstamp
> rm -f slave/containerizer/mesos/isolators/network/*.lo
> rm -f slave/containerizer/mesos/provisioner/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/posix/*.o
> rm -f slave/containerizer/mesos/provisioner/.dirstamp
> rm -f slave/containerizer/mesos/isolators/posix/*.lo
> rm -f slave/containerizer/mesos/provisioner/appc/.deps/.dirstamp
> rm -f slave/containerizer/mesos/provisioner/*.o
> rm -f slave/containerizer/mesos/provisioner/appc/.dirstamp
> rm -f slave/containerizer/mesos/provisioner/*.lo
> rm -f slave/containerizer/mesos/provisioner/backends/.deps/.dirstamp
> rm -f slave/containerizer/mesos/provisioner/appc/*.o
> rm -f slave/containerizer/mesos/provisioner/backends/.dirstamp
> rm -f slave/containerizer/mesos/provisioner/appc/*.lo
> rm -f slave/containerizer/mesos/provisioner/docker/.deps/.dirstamp
> rm -f slave/containerizer/mesos/provisioner/backends/*.o
> rm -f 

Re: Build failed in Jenkins: Mesos » gcc,--verbose,ubuntu:14.04,docker||Hadoop #1412

2015-12-29 Thread Benjamin Mahler
Sorry about that. Reverted my change to add the xml reports given make
distclean doesn't pick them up.

I'll file a ticket for getting the reporting into jenkins.

On Tue, Dec 29, 2015 at 2:24 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1412/changes
> >
>
> Changes:
>
> [benjamin.mahler] Added XML test output to the jenkins docker build.
>
> --
> [...truncated 155105 lines...]
> rm -f linux/routing/filter/.dirstamp
> rm -rf ../include/mesos/module/.libs ../include/mesos/module/_libs
> rm -f linux/routing/link/*.o
> rm -f linux/routing/link/.deps/.dirstamp
> rm -rf ../include/mesos/quota/.libs ../include/mesos/quota/_libs
> rm -f linux/routing/link/*.lo
> rm -f linux/routing/link/.dirstamp
> rm -f linux/routing/queueing/*.o
> rm -rf ../include/mesos/scheduler/.libs ../include/mesos/scheduler/_libs
> rm -f linux/routing/queueing/.deps/.dirstamp
> rm -f linux/routing/queueing/*.lo
> rm -f linux/routing/queueing/.dirstamp
> rm -rf ../include/mesos/slave/.libs ../include/mesos/slave/_libs
> rm -f local/*.o
> rm -f local/.deps/.dirstamp
> rm -rf ../include/mesos/uri/.libs ../include/mesos/uri/_libs
> rm -f local/.dirstamp
> rm -f local/*.lo
> rm -rf ../include/mesos/v1/.libs ../include/mesos/v1/_libs
> rm -f log/.deps/.dirstamp
> rm -f log/*.o
> rm -rf ../include/mesos/v1/executor/.libs
> ../include/mesos/v1/executor/_libs
> rm -f log/.dirstamp
> rm -f log/*.lo
> rm -f log/tool/.deps/.dirstamp
> rm -rf ../include/mesos/v1/scheduler/.libs
> ../include/mesos/v1/scheduler/_libs
> rm -f log/tool/*.o
> rm -f log/tool/.dirstamp
> rm -rf authentication/cram_md5/.libs authentication/cram_md5/_libs
> rm -f log/tool/*.lo
> rm -f logging/*.o
> rm -f logging/.deps/.dirstamp
> rm -rf authorizer/.libs authorizer/_libs
> rm -f logging/*.lo
> rm -f logging/.dirstamp
> rm -rf authorizer/local/.libs authorizer/local/_libs
> rm -f master/*.o
> rm -f master/.deps/.dirstamp
> rm -rf common/.libs common/_libs
> rm -f master/*.lo
> rm -f master/.dirstamp
> rm -f master/allocator/.deps/.dirstamp
> rm -rf docker/.libs docker/_libs
> rm -f master/allocator/*.o
> rm -f master/allocator/.dirstamp
> rm -f master/allocator/*.lo
> rm -rf examples/.libs examples/_libs
> rm -f master/allocator/mesos/.deps/.dirstamp
> rm -f master/allocator/mesos/*.o
> rm -f master/allocator/mesos/.dirstamp
> rm -f master/allocator/mesos/*.lo
> rm -f master/allocator/sorter/drf/.deps/.dirstamp
> rm -rf exec/.libs exec/_libs
> rm -f master/allocator/sorter/drf/*.o
> rm -f master/allocator/sorter/drf/.dirstamp
> rm -f master/allocator/sorter/drf/*.lo
> rm -rf files/.libs files/_libs
> rm -f messages/.deps/.dirstamp
> rm -f messages/*.o
> rm -f messages/.dirstamp
> rm -rf hdfs/.libs hdfs/_libs
> rm -f messages/*.lo
> rm -f module/.deps/.dirstamp
> rm -f module/*.o
> rm -rf hook/.libs hook/_libs
> rm -f module/.dirstamp
> rm -f module/*.lo
> rm -rf internal/.libs internal/_libs
> rm -f sched/.deps/.dirstamp
> rm -f sched/*.o
> rm -rf java/jni/.libs java/jni/_libs
> rm -f sched/.dirstamp
> rm -f sched/*.lo
> rm -f scheduler/.deps/.dirstamp
> rm -f scheduler/*.o
> rm -rf jvm/.libs jvm/_libs
> rm -f scheduler/.dirstamp
> rm -f scheduler/*.lo
> rm -rf jvm/org/apache/.libs jvm/org/apache/_libs
> rm -f slave/.deps/.dirstamp
> rm -f slave/*.o
> rm -f slave/.dirstamp
> rm -rf linux/.libs linux/_libs
> rm -f slave/*.lo
> rm -f slave/container_loggers/.deps/.dirstamp
> rm -f slave/container_loggers/*.o
> rm -f slave/container_loggers/.dirstamp
> rm -f slave/container_loggers/*.lo
> rm -f slave/containerizer/.deps/.dirstamp
> rm -rf linux/routing/.libs linux/routing/_libs
> rm -f slave/containerizer/*.o
> rm -f slave/containerizer/.dirstamp
> rm -rf linux/routing/diagnosis/.libs linux/routing/diagnosis/_libs
> rm -f slave/containerizer/*.lo
> rm -f slave/containerizer/mesos/.deps/.dirstamp
> rm -rf linux/routing/filter/.libs linux/routing/filter/_libs
> rm -f slave/containerizer/mesos/.dirstamp
> rm -f slave/containerizer/mesos/*.o
> rm -rf linux/routing/link/.libs linux/routing/link/_libs
> rm -f slave/containerizer/mesos/isolators/cgroups/.deps/.dirstamp
> rm -f slave/containerizer/mesos/*.lo
> rm -rf linux/routing/queueing/.libs linux/routing/queueing/_libs
> rm -f slave/containerizer/mesos/isolators/cgroups/.dirstamp
> rm -f slave/containerizer/mesos/isolators/cgroups/*.o
> rm -rf local/.libs local/_libs
> rm -f slave/containerizer/mesos/isolators/filesystem/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/cgroups/*.lo
> rm -f slave/containerizer/mesos/isolators/filesystem/.dirstamp
> rm -rf log/.libs log/_libs
> rm -f slave/containerizer/mesos/isolators/filesystem/*.o
> rm -f slave/containerizer/mesos/isolators/namespaces/.deps/.dirstamp
> rm -f slave/containerizer/mesos/isolators/filesystem/*.lo
> rm -f slave/containerizer/mesos/isolators/namespaces/.dirstamp
> rm -f 

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #1395

2015-12-29 Thread Benjamin Mahler
Filed: https://issues.apache.org/jira/browse/MESOS-4257

On Mon, Dec 21, 2015 at 2:21 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1395/changes
> >
>
> Changes:
>
> [mpark] Removed extra ';' after inline class function definitions.
>
> [mpark] Cleaned up formatting for multiple inheritance.
>
> --
> [...truncated 69381 lines...]
> I1221 22:20:04.184761   405 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:04.184831   405 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 611475ns
> I1221 22:20:05.186663   413 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:05.186728   413 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 478649ns
> I1221 22:20:06.188304   412 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:06.188375   412 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 602236ns
> I1221 22:20:06.274068   407 slave.cpp:4236] Current disk usage 3.93%. Max
> allowed age: 6.025146828209665days
> I1221 22:20:06.274101   415 slave.cpp:4236] Current disk usage 3.93%. Max
> allowed age: 6.025146828209665days
> I1221 22:20:06.274068   417 slave.cpp:4236] Current disk usage 3.93%. Max
> allowed age: 6.025146828209665days
> I1221 22:20:07.190656   407 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:07.190717   407 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 638504ns
> I1221 22:20:07.266537   406 slave.cpp:3371] Received ping from
> slave-observer(1)@172.17.0.2:60254
> I1221 22:20:07.267007   419 slave.cpp:3371] Received ping from
> slave-observer(2)@172.17.0.2:60254
> I1221 22:20:07.267308   420 slave.cpp:4599] Querying resource estimator
> for oversubscribable resources
> I1221 22:20:07.267601   420 slave.cpp:4613] Received oversubscribable
> resources  from the resource estimator
> I1221 22:20:07.267938   407 slave.cpp:4599] Querying resource estimator
> for oversubscribable resources
> I1221 22:20:07.268116   407 slave.cpp:4613] Received oversubscribable
> resources  from the resource estimator
> I1221 22:20:07.268371   416 slave.cpp:4599] Querying resource estimator
> for oversubscribable resources
> I1221 22:20:07.268558   416 slave.cpp:4613] Received oversubscribable
> resources  from the resource estimator
> I1221 22:20:07.917891   411 slave.cpp:3371] Received ping from
> slave-observer(3)@172.17.0.2:60254
> I1221 22:20:08.192203   415 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:08.192268   415 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 517781ns
> I1221 22:20:09.194166   408 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:09.194242   408 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 579094ns
> I1221 22:20:10.195190   405 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:10.195255   405 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 522234ns
> I1221 22:20:11.196250   418 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:11.196318   418 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 557424ns
> I1221 22:20:12.197145   414 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:12.197209   414 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 470512ns
> I1221 22:20:13.199066   409 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:13.199131   409 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 503241ns
> I1221 22:20:14.201129   412 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:14.201205   412 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 618819ns
> I1221 22:20:15.202113   406 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:15.202178   406 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 458195ns
> I1221 22:20:16.204035   419 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:16.204097   419 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 475311ns
> I1221 22:20:17.205832   420 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:17.205893   420 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 450188ns
> I1221 22:20:18.207190   415 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:18.207252   415 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 503137ns
> I1221 22:20:19.208309   417 hierarchical.cpp:1329] No resources available
> to allocate!
> I1221 22:20:19.208394   417 hierarchical.cpp:1079] Performed allocation
> for 3 slaves in 605368ns
> I1221 22:20:20.209298   407 hierarchical.cpp:1329] No resources 

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1353

2015-12-12 Thread Benjamin Mahler
Will do

On Saturday, December 12, 2015, Michael Park  wrote:

> Looks like the tests are hanging at various places; locally on my OS X box
> it hangs on *FetcherCacheHttpTest.HttpCachedConcurrent*. Could you take a
> look?
>
> On Sat, Dec 12, 2015 at 1:51 AM Apache Jenkins Server <
> jenk...@builds.apache.org > wrote:
>
> > See <
> >
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1353/
> > >
> >
> > --
> > [...truncated 131592 lines...]
> > 2015-12-12
> > 06:37:57,286:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:00,623:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:03,960:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:07,298:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:10,634:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:13,971:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:17,307:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:20,643:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:23,979:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:27,315:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:30,652:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:33,988:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:37,325:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:40,664:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:44,001:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:47,337:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:50,674:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:54,011:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:38:57,347:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:39:00,683:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
> > server refused to accept the client
> > 2015-12-12
> > 06:39:04,019:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
> @1697:
> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection 

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1353

2015-12-12 Thread Benjamin Mahler
I pushed a fix for the deadlock issue in HttpTest.Endpoints.

There also appear to be issues with ProcessTest.Http1 and
ProcessTest.Http2. I'll look into those as well.

On Sat, Dec 12, 2015 at 8:40 AM, Benjamin Mahler <benjamin.mah...@gmail.com>
wrote:

> Will do
>
>
> On Saturday, December 12, 2015, Michael Park <mp...@apache.org> wrote:
>
>> Looks like the tests are hanging at various places; locally on my OS X box
>> it hangs on *FetcherCacheHttpTest.HttpCachedConcurrent*. Could you take a
>> look?
>>
>> On Sat, Dec 12, 2015 at 1:51 AM Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>>
>> > See <
>> >
>> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1353/
>> > >
>> >
>> > --
>> > [...truncated 131592 lines...]
>> > 2015-12-12
>> > 06:37:57,286:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:00,623:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:03,960:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:07,298:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:10,634:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:13,971:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:17,307:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:20,643:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:23,979:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:27,315:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:30,652:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:33,988:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:37,325:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:40,664:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:44,001:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>> @1697:
>> > Socket [127.0.0.1:34374] zk retcode=-4, errno=111(Connection refused):
>> > server refused to accept the client
>> > 2015-12-12
>> > 06:38:47,337:28721(0x2b23b3a0d700):ZOO_ERROR@handle_socket_error_msg
>

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1353

2015-12-12 Thread Benjamin Mahler
Fix for ProcessTest.Http1:
https://reviews.apache.org/r/41318/

Some Subprocess test cleanups that I noticed were needed when I fixed the
fd leak:
https://reviews.apache.org/r/41320/

On Sat, Dec 12, 2015 at 2:33 PM, Benjamin Mahler <benjamin.mah...@gmail.com>
wrote:

> The issues with ProcessTest.Http1 and ProcessTest.Http2 appear to be
> existing issues, unrelated to the authentication changes.
>
> ProcessTest.Http1 hangs after a number of iterations because it uses
> http::post to do libprocess message passing, but since it sets the
> "User-Agent" header, libprocess does not reply with a 202. I'll update it
> to use http::connect and explicitly close the connection. I also noticed
> a file descriptor leak in the subprocess tests while looking at leaks
> here.
>
> Haven't had a chance to look at ProcessTest.Http2 yet.
>
> I also noticed that ProcessTest.Remote and ProcessTest.Http1 seem to hit a
> resource limitation around 16K runs, which leads to:
> (socket.connect(process.self().address)).failure(): Failed to connect to
> 192.168.1.88:52713: Can't assign requested address
>
> Needless to say, having CI run the tests in repetition would be great. :)
>
> On Sat, Dec 12, 2015 at 11:12 AM, Benjamin Mahler <
> benjamin.mah...@gmail.com> wrote:
>
>> I pushed a fix for the deadlock issue in HttpTest.Endpoints.
>>
>> There also appear to be issues with ProcessTest.Http1 and
>> ProcessTest.Http2. I'll look into those as well.
>>
>> On Sat, Dec 12, 2015 at 8:40 AM, Benjamin Mahler <
>> benjamin.mah...@gmail.com> wrote:
>>
>>> Will do
>>>
>>>
>>> On Saturday, December 12, 2015, Michael Park <mp...@apache.org> wrote:
>>>
>>>> Looks like the tests are hanging at various places; locally on my OS X
>>>> box it hangs on *FetcherCacheHttpTest.HttpCachedConcurrent*. Could you
>>>> take a look?
>>>>
>>>> On Sat, Dec 12, 2015 at 1:51 AM Apache Jenkins Server <
>>>> jenk...@builds.apache.org> wrote:
>>>>
>>>> > See <
>>>> >
>>>> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1353/
>>>> > >
>>>> >
>>>> > --
>>>> > [...truncated 131592 lines...]
>>
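The "Can't assign requested address" ceiling around 16K repeated runs
described above is consistent with ephemeral-port exhaustion: each
iteration's client socket lingers in TIME_WAIT after close until the
kernel's ephemeral range runs dry. A quick way to see the limit on Linux (a
hedged aside, not from the thread; OS X exposes different sysctls):

```shell
# Print the kernel's ephemeral (client-side) port range; connect() fails
# with EADDRNOTAVAIL ("Can't assign requested address") once every port
# in this range is tied up, e.g. by sockets lingering in TIME_WAIT.
cat /proc/sys/net/ipv4/ip_local_port_range
# Rough count of usable ephemeral ports (values are machine-specific).
awk '{print $2 - $1 + 1, "ephemeral ports"}' /proc/sys/net/ipv4/ip_local_port_range
```

With the default range of roughly 28K ports, a test loop that burns one
port per iteration plausibly hits the wall in the ballpark reported above.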

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1340

2015-12-11 Thread Benjamin Mahler
Fixed, sorry about that!

On Wed, Dec 9, 2015 at 5:55 PM, Joseph Wu  wrote:

> Filed a JIRA:
> https://issues.apache.org/jira/browse/MESOS-4109
>
> On Wed, Dec 9, 2015 at 5:44 PM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
> > See <
> >
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1340/changes
> > >
> >
> > Changes:
> >
> > [joris.van.remoortere] CMake: Added missing source files to
> > src/CMakeLists.txt.
> >
> > --
> > [...truncated 151165 lines...]
> > I1210 01:44:23.736732 28738 gc.cpp:54] Scheduling
> >
> '/tmp/ContentType_SchedulerTest_Message_1_Z5f3Ge/slaves/bda7bcb3-1bb3-443f-b281-fe467f538cde-S0/frameworks/bda7bcb3-1bb3-443f-b281-fe467f538cde-/executors/default/runs/5dc4e664-9569-45d1-a61b-987931a021ec'
> > for gc 6.9147430518days in the future
> > I1210 01:44:23.736867 28737 slave.cpp:3947] Cleaning up framework
> > bda7bcb3-1bb3-443f-b281-fe467f538cde-
> > I1210 01:44:23.737170 28738 gc.cpp:54] Scheduling
> >
> '/tmp/ContentType_SchedulerTest_Message_1_Z5f3Ge/slaves/bda7bcb3-1bb3-443f-b281-fe467f538cde-S0/frameworks/bda7bcb3-1bb3-443f-b281-fe467f538cde-/executors/default'
> > for gc 6.9147213037days in the future
> > I1210 01:44:23.737339 28735 status_update_manager.cpp:282] Closing status
> > update streams for framework bda7bcb3-1bb3-443f-b281-fe467f538cde-
> > I1210 01:44:23.737475 28738 gc.cpp:54] Scheduling
> >
> '/tmp/ContentType_SchedulerTest_Message_1_Z5f3Ge/slaves/bda7bcb3-1bb3-443f-b281-fe467f538cde-S0/frameworks/bda7bcb3-1bb3-443f-b281-fe467f538cde-'
> > for gc 6.9146855407days in the future
> > I1210 01:44:23.737417 28735 status_update_manager.cpp:528] Cleaning up
> > status update stream for task 641dec3b-cc40-4065-9de9-00855205434e of
> > framework bda7bcb3-1bb3-443f-b281-fe467f538cde-
> > [   OK ] ContentType/SchedulerTest.Message/1 (194 ms)
> > [ RUN  ] ContentType/SchedulerTest.Request/0
> > I1210 01:44:23.749857 28710 leveldb.cpp:174] Opened db in 2.849183ms
> > I1210 01:44:23.750905 28710 leveldb.cpp:181] Compacted db in 828460ns
> > I1210 01:44:23.751086 28710 leveldb.cpp:196] Created db iterator in
> 37293ns
> > I1210 01:44:23.751255 28710 leveldb.cpp:202] Seeked to beginning of db in
> > 14742ns
> > I1210 01:44:23.751406 28710 leveldb.cpp:271] Iterated through 0 keys in
> > the db in 13387ns
> > I1210 01:44:23.751592 28710 replica.cpp:778] Replica recovered with log
> > positions 0 -> 0 with 1 holes and 0 unlearned
> > I1210 01:44:23.753577 28733 recover.cpp:447] Starting replica recovery
> > I1210 01:44:23.755203 28739 master.cpp:365] Master
> > 106c42a2-8dc3-4823-a856-454a3a50e7fc (a5faf7d4a82d) started on
> > 172.17.0.3:48381
> > I1210 01:44:23.755913 28733 recover.cpp:473] Replica is in EMPTY status
> > I1210 01:44:23.756075 28739 master.cpp:367] Flags at startup: --acls=""
> > --allocation_interval="1secs" --allocator="HierarchicalDRF"
> > --authenticate="false" --authenticate_slaves="true"
> > --authenticators="crammd5" --authorizers="local"
> > --credentials="/tmp/cRsF7A/credentials" --framework_sorter="drf"
> > --help="false" --hostname_lookup="true"
> --initialize_driver_logging="true"
> > --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> > --max_slave_ping_timeouts="5" --quiet="false"
> > --recovery_slave_removal_limit="100%" --registry="replicated_log"
> > --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> > --registry_strict="true" --root_submissions="true"
> > --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> > --user_sorter="drf" --version="false"
> > --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> > --work_dir="/tmp/cRsF7A/master" --zk_session_timeout="10secs"
> > I1210 01:44:23.756860 28739 master.cpp:414] Master allowing
> > unauthenticated frameworks to register
> > I1210 01:44:23.757125 28739 master.cpp:417] Master only allowing
> > authenticated slaves to register
> > I1210 01:44:23.757272 28739 credentials.hpp:35] Loading credentials for
> > authentication from '/tmp/cRsF7A/credentials'
> > I1210 01:44:23.757761 28739 master.cpp:456] Using default 'crammd5'
> > authenticator
> > I1210 01:44:23.757885 28731 replica.cpp:674] Replica in EMPTY status
> > received a broadcasted recover request from (11453)@172.17.0.3:48381
> > I1210 01:44:23.757920 28739 master.cpp:493] Authorization enabled
> > I1210 01:44:23.758680 28733 recover.cpp:193] Received a recover response
> > from a replica in EMPTY status
> > I1210 01:44:23.759109 28742 hierarchical.cpp:163] Initialized
> hierarchical
> > allocator process
> > I1210 01:44:23.759220 28736 whitelist_watcher.cpp:77] No whitelist given
> > I1210 01:44:23.759279 28742 recover.cpp:564] Updating replica status to
> > STARTING
> > I1210 01:44:23.760128 28733 leveldb.cpp:304] Persisting metadata (8
> bytes)
> > to leveldb took 636122ns
> > I1210 

Re: Build failed in Jenkins: Mesos » gcc,--verbose,centos:7,docker||Hadoop #1336

2015-12-09 Thread Benjamin Mahler
https://issues.apache.org/jira/browse/MESOS-4024

On Wed, Dec 9, 2015 at 1:32 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose,OS=centos%3A7,label_exp=docker%7C%7CHadoop/1336/changes
> >
>
> Changes:
>
> [tnachen] Add mesos provisioner doc.
>
> --
> [...truncated 153346 lines...]
> I1209 09:32:06.646441 31386 slave.cpp:3859] Cleaning up executor 'default'
> of framework 5fb5da44-0c67-4ea6-b01c-da448034d0b6- at executor(135)@
> 172.17.0.1:36768
> I1209 09:32:06.646623 31383 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_hnwizd/slaves/5fb5da44-0c67-4ea6-b01c-da448034d0b6-S0/frameworks/5fb5da44-0c67-4ea6-b01c-da448034d0b6-/executors/default/runs/995547a3-580a-4852-a3de-9c947f10e9b6'
> for gc 6.9251674667days in the future
> I1209 09:32:06.646757 31386 slave.cpp:3947] Cleaning up framework
> 5fb5da44-0c67-4ea6-b01c-da448034d0b6-
> I1209 09:32:06.646775 31383 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_hnwizd/slaves/5fb5da44-0c67-4ea6-b01c-da448034d0b6-S0/frameworks/5fb5da44-0c67-4ea6-b01c-da448034d0b6-/executors/default'
> for gc 6.9251503407days in the future
> I1209 09:32:06.646872 31378 status_update_manager.cpp:282] Closing status
> update streams for framework 5fb5da44-0c67-4ea6-b01c-da448034d0b6-
> I1209 09:32:06.646934 31378 status_update_manager.cpp:528] Cleaning up
> status update stream for task 5bcaf46a-6505-47fc-bb3c-b47682ea632e of
> framework 5fb5da44-0c67-4ea6-b01c-da448034d0b6-
> I1209 09:32:06.646956 31389 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_hnwizd/slaves/5fb5da44-0c67-4ea6-b01c-da448034d0b6-S0/frameworks/5fb5da44-0c67-4ea6-b01c-da448034d0b6-'
> for gc 6.9251305481days in the future
> [   OK ] ContentType/SchedulerTest.Message/1 (626 ms)
> [ RUN  ] ContentType/SchedulerTest.Request/0
> I1209 09:32:06.764993 31357 leveldb.cpp:174] Opened db in 113.871537ms
> I1209 09:32:06.816138 31357 leveldb.cpp:181] Compacted db in 51.06532ms
> I1209 09:32:06.816267 31357 leveldb.cpp:196] Created db iterator in 30876ns
> I1209 09:32:06.816283 31357 leveldb.cpp:202] Seeked to beginning of db in
> 4150ns
> I1209 09:32:06.816292 31357 leveldb.cpp:271] Iterated through 0 keys in
> the db in 251ns
> I1209 09:32:06.816346 31357 replica.cpp:778] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I1209 09:32:06.817158 31391 recover.cpp:447] Starting replica recovery
> I1209 09:32:06.817488 31391 recover.cpp:473] Replica is in EMPTY status
> I1209 09:32:06.818835 31376 replica.cpp:674] Replica in EMPTY status
> received a broadcasted recover request from (11515)@172.17.0.1:36768
> I1209 09:32:06.819315 31387 recover.cpp:193] Received a recover response
> from a replica in EMPTY status
> I1209 09:32:06.819808 31382 recover.cpp:564] Updating replica status to
> STARTING
> I1209 09:32:06.820646 31381 master.cpp:365] Master
> 8915e203-6fb7-47de-9efa-cbd35e80f323 (83469567f7e5) started on
> 172.17.0.1:36768
> I1209 09:32:06.820677 31381 master.cpp:367] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_slaves="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/EXBunl/credentials" --framework_sorter="drf"
> --help="false" --hostname_lookup="true" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_slave_ping_timeouts="5" --quiet="false"
> --recovery_slave_removal_limit="100%" --registry="replicated_log"
> --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> --registry_strict="true" --root_submissions="true"
> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> --user_sorter="drf" --version="false"
> --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> --work_dir="/tmp/EXBunl/master" --zk_session_timeout="10secs"
> I1209 09:32:06.821024 31381 master.cpp:414] Master allowing
> unauthenticated frameworks to register
> I1209 09:32:06.821053 31381 master.cpp:417] Master only allowing
> authenticated slaves to register
> I1209 09:32:06.821072 31381 credentials.hpp:35] Loading credentials for
> authentication from '/tmp/EXBunl/credentials'
> I1209 09:32:06.821445 31381 master.cpp:456] Using default 'crammd5'
> authenticator
> I1209 09:32:06.821604 31381 master.cpp:493] Authorization enabled
> I1209 09:32:06.821856 31391 whitelist_watcher.cpp:77] No whitelist given
> I1209 09:32:06.821907 31377 hierarchical.cpp:163] Initialized hierarchical
> allocator process
> I1209 09:32:06.823684 31378 master.cpp:1637] The newly elected leader is
> master@172.17.0.1:36768 with id 8915e203-6fb7-47de-9efa-cbd35e80f323
> I1209 09:32:06.823719 31378 master.cpp:1650] Elected as the leading master!
> I1209 09:32:06.823737 31378 master.cpp:1395] Recovering from 

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1333

2015-12-08 Thread Benjamin Mahler
Hm.. the following looks like a bug in the git plugin:

stderr: fatal: reference is not a tree: 8833974195e76e6e7cd8377fb511aa2f96e409e6

I suspect this might be related to the fact that we do a shallow clone, and
there seems to be a double fetch in the above output. If the SHA is
obtained after the first fetch, and the second fetch retrieves another
commit, we can no longer check out that SHA.

E.g.
http://help.appveyor.com/discussions/problems/1272-fatal-reference-is-not-a-tree-sha-on-git-checkout
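A minimal sketch of that race (illustrative paths and commit messages, not
the plugin's actual commands): a shallow fetch only retrieves the branch
tip, so a SHA recorded before a newer commit landed can no longer be
checked out.

```shell
set -e
rm -rf /tmp/origin /tmp/ws
git init -q /tmp/origin
git -C /tmp/origin -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "commit A"
OLD_SHA=$(git -C /tmp/origin rev-parse HEAD)     # SHA the build wants
git -C /tmp/origin -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "commit B"        # lands before the fetch
git init -q /tmp/ws
# Mirrors the plugin's shallow fetch: --depth=1 retrieves only "commit B".
git -C /tmp/ws fetch -q --depth=1 /tmp/origin '+refs/heads/*:refs/remotes/origin/*'
# Fails: "fatal: reference is not a tree: <OLD_SHA>"
git -C /tmp/ws checkout -f "$OLD_SHA" || true
```

This reproduces the exact failure mode: the old SHA's objects were never
fetched into the shallow workspace, so the checkout step aborts.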

On Tue, Dec 8, 2015 at 3:59 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1333/
> >
>
> --
> Started by upstream project "Mesos" build number 1333
> originally caused by:
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
>  Started by an SCM change
> [EnvInject] - Loading node environment variables.
> Building remotely on H0 (Hadoop Tez) in workspace <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/ws/
> >
> Cloning the remote Git repository
> Using shallow clone
> Cloning repository https://git-wip-us.apache.org/repos/asf/mesos.git
>  > git init <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/ws/>
> # timeout=10
> Fetching upstream changes from
> https://git-wip-us.apache.org/repos/asf/mesos.git
>  > git --version # timeout=10
>  > git -c core.askpass=true fetch --tags --progress
> https://git-wip-us.apache.org/repos/asf/mesos.git
> +refs/heads/*:refs/remotes/origin/* --depth=1 # timeout=60
>  > git config remote.origin.url
> https://git-wip-us.apache.org/repos/asf/mesos.git # timeout=10
>  > git config --add remote.origin.fetch
> +refs/heads/*:refs/remotes/origin/* # timeout=10
>  > git config remote.origin.url
> https://git-wip-us.apache.org/repos/asf/mesos.git # timeout=10
> Fetching upstream changes from
> https://git-wip-us.apache.org/repos/asf/mesos.git
>  > git -c core.askpass=true fetch --tags --progress
> https://git-wip-us.apache.org/repos/asf/mesos.git
> +refs/heads/*:refs/remotes/origin/* # timeout=60
> Checking out Revision 8833974195e76e6e7cd8377fb511aa2f96e409e6
> (origin/master)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f 8833974195e76e6e7cd8377fb511aa2f96e409e6
> FATAL: Could not checkout 8833974195e76e6e7cd8377fb511aa2f96e409e6
> hudson.plugins.git.GitException: Could not checkout
> 8833974195e76e6e7cd8377fb511aa2f96e409e6
> at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$8.execute(CliGitAPIImpl.java:1907)
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
> at hudson.remoting.UserRequest.perform(UserRequest.java:121)
> at hudson.remoting.UserRequest.perform(UserRequest.java:49)
> at hudson.remoting.Request$2.run(Request.java:326)
> at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> at ..remote call to H0(Native Method)
> at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
> at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
> at hudson.remoting.Channel.call(Channel.java:778)
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
> at sun.reflect.GeneratedMethodAccessor440.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
> at com.sun.proxy.$Proxy137.execute(Unknown Source)
> at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1060)
> at hudson.scm.SCM.checkout(SCM.java:484)
> at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
> at
> hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
> at
> jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
> at
> 

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #1331

2015-12-08 Thread Benjamin Mahler
Thanks! We should remove the "health" code related to this; it was never
finished or used.

On Tue, Dec 8, 2015 at 1:19 PM, Neil Conway  wrote:

> This build failure is due to a broken test
> (HealthTest.ObserveEndpoint), and has already been fixed in master:
>
> https://reviews.apache.org/r/41100/
>
> Neil
>
> On Tue, Dec 8, 2015 at 1:13 PM, Apache Jenkins Server
>  wrote:
> > See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1331/
> >
> >
> > --
> > [...truncated 149982 lines...]
> > I1208 21:13:32.888731 31805 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_UcnUDi/slaves/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-S0/frameworks/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-/executors/default/runs/5042430f-2899-4d52-9124-7c8072b1f90c'
> for gc 6.8971481481days in the future
> > I1208 21:13:32.69 31806 slave.cpp:3947] Cleaning up framework
> 4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-
> > I1208 21:13:32.888994 31804 status_update_manager.cpp:282] Closing
> status update streams for framework
> 4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-
> > I1208 21:13:32.889000 31805 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_UcnUDi/slaves/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-S0/frameworks/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-/executors/default'
> for gc 6.8971283556days in the future
> > I1208 21:13:32.889173 31804 status_update_manager.cpp:528] Cleaning up
> status update stream for task ce556009-8357-4b71-ae64-32efac0c6a76 of
> framework 4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-
> > I1208 21:13:32.889294 31805 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_UcnUDi/slaves/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-S0/frameworks/4df7a4d2-ec02-4b66-b29a-6e32f081f4b3-'
> for gc 6.8971096days in the future
> > [   OK ] ContentType/SchedulerTest.Message/1 (116 ms)
> > [ RUN  ] ContentType/SchedulerTest.Request/0
> > I1208 21:13:32.900418 31775 leveldb.cpp:174] Opened db in 5.410505ms
> > I1208 21:13:32.901538 31775 leveldb.cpp:181] Compacted db in 1.036746ms
> > I1208 21:13:32.901629 31775 leveldb.cpp:196] Created db iterator in
> 25385ns
> > I1208 21:13:32.901646 31775 leveldb.cpp:202] Seeked to beginning of db
> in 2077ns
> > I1208 21:13:32.901657 31775 leveldb.cpp:271] Iterated through 0 keys in
> the db in 345ns
> > I1208 21:13:32.901715 31775 replica.cpp:778] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> > I1208 21:13:32.902657 31796 recover.cpp:447] Starting replica recovery
> > I1208 21:13:32.903240 31795 recover.cpp:473] Replica is in EMPTY status
> > I1208 21:13:32.904693 31798 replica.cpp:674] Replica in EMPTY status
> received a broadcasted recover request from (11493)@172.17.0.1:51123
> > I1208 21:13:32.905571 31806 recover.cpp:193] Received a recover response
> from a replica in EMPTY status
> > I1208 21:13:32.906330 31805 recover.cpp:564] Updating replica status to
> STARTING
> > I1208 21:13:32.907173 31803 leveldb.cpp:304] Persisting metadata (8
> bytes) to leveldb took 656317ns
> > I1208 21:13:32.907210 31803 replica.cpp:321] Persisted replica status to
> STARTING
> > I1208 21:13:32.907413 31803 recover.cpp:473] Replica is in STARTING
> status
> > I1208 21:13:32.908320 31802 master.cpp:365] Master
> 82f3b2af-5808-46f2-ae60-6bae2d0029cf (f652123f5286) started on
> 172.17.0.1:51123
> > I1208 21:13:32.908345 31802 master.cpp:367] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_slaves="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/N8LyKl/credentials" --framework_sorter="drf"
> --help="false" --hostname_lookup="true" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_slave_ping_timeouts="5" --quiet="false"
> --recovery_slave_removal_limit="100%" --registry="replicated_log"
> --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> --registry_strict="true" --root_submissions="true"
> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> --user_sorter="drf" --version="false"
> --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> --work_dir="/tmp/N8LyKl/master" --zk_session_timeout="10secs"
> > I1208 21:13:32.908740 31802 master.cpp:414] Master allowing
> unauthenticated frameworks to register
> > I1208 21:13:32.908751 31802 master.cpp:417] Master only allowing
> authenticated slaves to register
> > I1208 21:13:32.908759 31802 credentials.hpp:35] Loading credentials for
> authentication from '/tmp/N8LyKl/credentials'
> > I1208 21:13:32.909150 31802 master.cpp:456] Using default 'crammd5'
> authenticator
> > I1208 21:13:32.909302 31802 master.cpp:493] Authorization enabled
> > I1208 21:13:32.909888 31803 

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #1317

2015-12-03 Thread Benjamin Mahler
https://issues.apache.org/jira/browse/MESOS-4059

Sorry Joseph, in my last email I didn't see your ticket. It would be great to
reply to all build failures for a particular test, since that makes it clear
when a build failure is being handled.

On Thu, Dec 3, 2015 at 12:10 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1317/changes
> >
>
> Changes:
>
> [joris.van.remoortere] Quota: Updated allocate() in the hierarchical
> allocator.
>
> --
> [...truncated 66921 lines...]
> I1203 20:10:00.699725 28747 master.cpp:5338] Successfully authenticated
> principal 'test-principal' at slave(112)@172.17.0.2:59124
> I1203 20:10:00.699808 28747 authenticator.cpp:431] Authentication session
> cleanup for crammd5_authenticatee(288)@172.17.0.2:59124
> I1203 20:10:00.700198 28742 slave.cpp:858] Successfully authenticated with
> master master@172.17.0.2:59124
> I1203 20:10:00.700353 28742 slave.cpp:1252] Will retry registration in
> 8.210485ms if necessary
> I1203 20:10:00.700698 28742 master.cpp:4017] Registering slave at
> slave(112)@172.17.0.2:59124 (maintenance-host-2) with id
> 52a08176-53ce-4791-8efa-0de027849fe4-S1
> I1203 20:10:00.701298 28742 registrar.cpp:439] Applied 1 operations in
> 121263ns; attempting to update the 'registry'
> I1203 20:10:00.702785 28743 log.cpp:683] Attempting to append 515 bytes to
> the log
> I1203 20:10:00.703001 28739 coordinator.cpp:348] Coordinator attempting to
> write APPEND action at position 5
> I1203 20:10:00.705103 28739 replica.cpp:538] Replica received write
> request for position 5 from (3796)@172.17.0.2:59124
> I1203 20:10:00.705418 28739 leveldb.cpp:341] Persisting action (534 bytes)
> to leveldb took 269815ns
> I1203 20:10:00.705451 28739 replica.cpp:713] Persisted action at 5
> I1203 20:10:00.706810 28745 replica.cpp:692] Replica received learned
> notice for position 5 from @0.0.0.0:0
> I1203 20:10:00.707990 28745 leveldb.cpp:341] Persisting action (536 bytes)
> to leveldb took 1.163208ms
> I1203 20:10:00.708041 28745 replica.cpp:713] Persisted action at 5
> I1203 20:10:00.708071 28745 replica.cpp:698] Replica learned APPEND action
> at position 5
> I1203 20:10:00.711784 28745 slave.cpp:1252] Will retry registration in
> 9.105101ms if necessary
> I1203 20:10:00.712334 28743 master.cpp:4005] Ignoring register slave
> message from slave(112)@172.17.0.2:59124 (maintenance-host-2) as
> admission is already in progress
> I1203 20:10:00.713016 28742 log.cpp:702] Attempting to truncate the log to
> 5
> I1203 20:10:00.713289 28742 coordinator.cpp:348] Coordinator attempting to
> write TRUNCATE action at position 6
> I1203 20:10:00.714016 28739 registrar.cpp:484] Successfully updated the
> 'registry' in 12.614144ms
> I1203 20:10:00.714514 28742 replica.cpp:538] Replica received write
> request for position 6 from (3797)@172.17.0.2:59124
> I1203 20:10:00.715093 28750 slave.cpp:3197] Received ping from
> slave-observer(115)@172.17.0.2:59124
> I1203 20:10:00.715111 28739 master.cpp:4085] Registered slave
> 52a08176-53ce-4791-8efa-0de027849fe4-S1 at slave(112)@172.17.0.2:59124
> (maintenance-host-2) with cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000]
> I1203 20:10:00.715380 28750 slave.cpp:902] Registered with master
> master@172.17.0.2:59124; given slave ID
> 52a08176-53ce-4791-8efa-0de027849fe4-S1
> I1203 20:10:00.715409 28750 fetcher.cpp:79] Clearing fetcher cache
> I1203 20:10:00.715880 28750 slave.cpp:925] Checkpointing SlaveInfo to
> '/tmp/MasterMaintenanceTest_InverseOffersFilters_n1w6Ys/meta/slaves/52a08176-53ce-4791-8efa-0de027849fe4-S1/
> slave.info'
> I1203 20:10:00.716008 28749 status_update_manager.cpp:181] Resuming
> sending status updates
> I1203 20:10:00.716219 28739 hierarchical.cpp:380] Added slave
> 52a08176-53ce-4791-8efa-0de027849fe4-S1 (maintenance-host-2) with
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> I1203 20:10:00.716397 28750 slave.cpp:961] Forwarding total oversubscribed
> resources
> I1203 20:10:00.716522 28739 hierarchical.cpp:1218] No resources available
> to allocate!
> I1203 20:10:00.716575 28739 hierarchical.cpp:1311] No inverse offers to
> send out!
> I1203 20:10:00.716578 28750 master.cpp:4427] Received update of slave
> 52a08176-53ce-4791-8efa-0de027849fe4-S1 at slave(112)@172.17.0.2:59124
> (maintenance-host-2) with total oversubscribed resources
> I1203 20:10:00.716608 28739 hierarchical.cpp:973] Performed allocation for
> slave 52a08176-53ce-4791-8efa-0de027849fe4-S1 in 350185ns
> I1203 20:10:00.718005 28742 leveldb.cpp:341] Persisting action (16 bytes)
> to leveldb took 3.465859ms
> I1203 20:10:00.718051 28742 replica.cpp:713] Persisted action at 6
> I1203 20:10:00.718312 28750 hierarchical.cpp:434] Slave
> 52a08176-53ce-4791-8efa-0de027849fe4-S1 (maintenance-host-2) updated with
> oversubscribed resources  

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #1319

2015-12-03 Thread Benjamin Mahler
Michael, can you please triage this test?

On Thu, Dec 3, 2015 at 7:31 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1319/changes
> >
>
> Changes:
>
> [benjamin.mahler] Documented why OsSignalsTest.Suppress works on OS X.
>
> --
> [...truncated 151042 lines...]
> I1204 03:31:52.023347 31790 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-/executors/default/runs/ad6c9ae3-14b4-4ae0-85e0-2b66874c1d0f'
> for gc 6.9973126222days in the future
> I1204 03:31:52.023437 31792 slave.cpp:3773] Cleaning up framework
> e567d75e-4b42-4184-a487-498d6e3e86f0-
> I1204 03:31:52.023515 31790 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-/executors/default'
> for gc 6.9972944593days in the future
> I1204 03:31:52.023558 31794 status_update_manager.cpp:282] Closing status
> update streams for framework e567d75e-4b42-4184-a487-498d6e3e86f0-
> I1204 03:31:52.023623 31794 status_update_manager.cpp:528] Cleaning up
> status update stream for task 3ee27512-7308-4ddf-8ab0-eb17196ad555 of
> framework e567d75e-4b42-4184-a487-498d6e3e86f0-
> I1204 03:31:52.023661 31790 gc.cpp:54] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-'
> for gc 6.9972689778days in the future
> [   OK ] ContentType/SchedulerTest.Message/1 (675 ms)
> [ RUN  ] ContentType/SchedulerTest.Request/0
> I1204 03:31:52.124193 31769 leveldb.cpp:174] Opened db in 94.016525ms
> I1204 03:31:52.149694 31769 leveldb.cpp:181] Compacted db in 25.458791ms
> I1204 03:31:52.149760 31769 leveldb.cpp:196] Created db iterator in 20834ns
> I1204 03:31:52.149780 31769 leveldb.cpp:202] Seeked to beginning of db in
> 1977ns
> I1204 03:31:52.149791 31769 leveldb.cpp:271] Iterated through 0 keys in
> the db in 303ns
> I1204 03:31:52.149832 31769 replica.cpp:778] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I1204 03:31:52.150544 31791 recover.cpp:447] Starting replica recovery
> I1204 03:31:52.151046 31791 recover.cpp:473] Replica is in EMPTY status
> I1204 03:31:52.152299 31802 replica.cpp:674] Replica in EMPTY status
> received a broadcasted recover request from (11482)@172.17.0.2:47200
> I1204 03:31:52.152914 31801 recover.cpp:193] Received a recover response
> from a replica in EMPTY status
> I1204 03:31:52.153547 31799 recover.cpp:564] Updating replica status to
> STARTING
> I1204 03:31:52.153744 31803 master.cpp:365] Master
> 0b57d640-d54a-4ef7-b6c0-df2eaa52b0fe (403686b83bbb) started on
> 172.17.0.2:47200
> I1204 03:31:52.153775 31803 master.cpp:367] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_slaves="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/0wBPVf/credentials" --framework_sorter="drf"
> --help="false" --hostname_lookup="true" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_slave_ping_timeouts="5" --quiet="false"
> --recovery_slave_removal_limit="100%" --registry="replicated_log"
> --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
> --registry_strict="true" --root_submissions="true"
> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
> --user_sorter="drf" --version="false"
> --webui_dir="/mesos/mesos-0.27.0/_inst/share/mesos/webui"
> --work_dir="/tmp/0wBPVf/master" --zk_session_timeout="10secs"
> I1204 03:31:52.154326 31803 master.cpp:414] Master allowing
> unauthenticated frameworks to register
> I1204 03:31:52.154345 31803 master.cpp:417] Master only allowing
> authenticated slaves to register
> I1204 03:31:52.154361 31803 credentials.hpp:35] Loading credentials for
> authentication from '/tmp/0wBPVf/credentials'
> I1204 03:31:52.154675 31803 master.cpp:456] Using default 'crammd5'
> authenticator
> I1204 03:31:52.154832 31803 master.cpp:493] Authorization enabled
> I1204 03:31:52.155102 31788 whitelist_watcher.cpp:77] No whitelist given
> I1204 03:31:52.155231 31796 hierarchical.cpp:163] Initialized hierarchical
> allocator process
> I1204 03:31:52.157017 31791 master.cpp:1637] The newly elected leader is
> master@172.17.0.2:47200 with id 0b57d640-d54a-4ef7-b6c0-df2eaa52b0fe
> I1204 03:31:52.157053 31791 master.cpp:1650] Elected as the leading master!
> I1204 03:31:52.157073 31791 master.cpp:1395] Recovering from registrar
> I1204 03:31:52.157237 31796 registrar.cpp:307] Recovering registrar
> I1204 03:31:52.186647 31798 

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #1319

2015-12-03 Thread Benjamin Mahler
Got it. Who shepherded it?

On Thu, Dec 3, 2015 at 7:38 PM, Michael Park <mp...@mesosphere.io> wrote:

> +Greg since he's the author, he might have a better idea
>
> On Thu, Dec 3, 2015, 10:36 PM Benjamin Mahler <benjamin.mah...@gmail.com>
> wrote:
>
>> Michael can you please triage this test?
>>
>> On Thu, Dec 3, 2015 at 7:31 PM, Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>>
>>> See <
>>> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1319/changes
>>> >
>>>
>>> Changes:
>>>
>>> [benjamin.mahler] Documented why OsSignalsTest.Suppress works on OS X.
>>>
>>> --
>>> [...truncated 151042 lines...]
>>> I1204 03:31:52.023347 31790 gc.cpp:54] Scheduling
>>> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-/executors/default/runs/ad6c9ae3-14b4-4ae0-85e0-2b66874c1d0f'
>>> for gc 6.9973126222days in the future
>>> I1204 03:31:52.023437 31792 slave.cpp:3773] Cleaning up framework
>>> e567d75e-4b42-4184-a487-498d6e3e86f0-
>>> I1204 03:31:52.023515 31790 gc.cpp:54] Scheduling
>>> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-/executors/default'
>>> for gc 6.9972944593days in the future
>>> I1204 03:31:52.023558 31794 status_update_manager.cpp:282] Closing
>>> status update streams for framework
>>> e567d75e-4b42-4184-a487-498d6e3e86f0-
>>> I1204 03:31:52.023623 31794 status_update_manager.cpp:528] Cleaning up
>>> status update stream for task 3ee27512-7308-4ddf-8ab0-eb17196ad555 of
>>> framework e567d75e-4b42-4184-a487-498d6e3e86f0-
>>> I1204 03:31:52.023661 31790 gc.cpp:54] Scheduling
>>> '/tmp/ContentType_SchedulerTest_Message_1_jkcDGA/slaves/e567d75e-4b42-4184-a487-498d6e3e86f0-S0/frameworks/e567d75e-4b42-4184-a487-498d6e3e86f0-'
>>> for gc 6.9972689778days in the future
>>> [   OK ] ContentType/SchedulerTest.Message/1 (675 ms)
>>> [ RUN  ] ContentType/SchedulerTest.Request/0
>>> I1204 03:31:52.124193 31769 leveldb.cpp:174] Opened db in 94.016525ms
>>> I1204 03:31:52.149694 31769 leveldb.cpp:181] Compacted db in 25.458791ms
>>> I1204 03:31:52.149760 31769 leveldb.cpp:196] Created db iterator in
>>> 20834ns
>>> I1204 03:31:52.149780 31769 leveldb.cpp:202] Seeked to beginning of db
>>> in 1977ns
>>> I1204 03:31:52.149791 31769 leveldb.cpp:271] Iterated through 0 keys in
>>> the db in 303ns
>>> I1204 03:31:52.149832 31769 replica.cpp:778] Replica recovered with log
>>> positions 0 -> 0 with 1 holes and 0 unlearned
>>> I1204 03:31:52.150544 31791 recover.cpp:447] Starting replica recovery
>>> I1204 03:31:52.151046 31791 recover.cpp:473] Replica is in EMPTY status
>>> I1204 03:31:52.152299 31802 replica.cpp:674] Replica in EMPTY status
>>> received a broadcasted recover request from (11482)@172.17.0.2:47200
>>> I1204 03:31:52.152914 31801 recover.cpp:193] Received a recover response
>>> from a replica in EMPTY status
>>> I1204 03:31:52.153547 31799 recover.cpp:564] Updating replica status to
>>> STARTING
>>> I1204 03:31:52.153744 31803 master.cpp:365] Master
>>> 0b57d640-d54a-4ef7-b6c0-df2eaa52b0fe (403686b83bbb) started on
>>> 172.17.0.2:47200
>>> I1204 03:31:52.153775 31803 master.cpp:367] Flags at startup: --acls=""
>>> --allocation_interval="1secs" --allocator="HierarchicalDRF"
>>> --authenticate="false" --authenticate_slaves="true"
>>> --authenticators="crammd5" --authorizers="local"
>>> --credentials="/tmp/0wBPVf/credentials" --framework_sorter="drf"
>>> --help="false" --hostname_lookup="true" --initialize_driver_logging="true"
>>> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
>>> --max_slave_ping_timeouts="5" --quiet="false"
>>> --recovery_slave_removal_limit="100%" --registry="replicated_log"
>>> --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
>>> --registry_strict="true" --root_submissions="true"
>>> --slave_ping_time

Re: Build failed in Jenkins: Mesos » gcc,--verbose,ubuntu:14.04,docker||Hadoop #1298

2015-12-03 Thread Benjamin Mahler
Thank you sir!

On Thu, Dec 3, 2015 at 7:45 PM, Michael Park <mp...@mesosphere.io> wrote:

> Yep, I'll look into it once I get back to a computer.
>
> On Thu, Dec 3, 2015, 10:44 PM Benjamin Mahler <benjamin.mah...@gmail.com>
> wrote:
>
>> Michael, this looks like https://issues.apache.org/jira/browse/MESOS-3907?
>> Can you please triage it?
>>
>> Tim, I've added details to
>> https://issues.apache.org/jira/browse/MESOS-4024. Can you please triage
>> it? Also, the test takes 16 seconds to finish? :(
>>
>> On Tue, Dec 1, 2015 at 5:13 AM, Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>>
>>> See <
>>> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/1298/changes
>>> >
>>>
>>> Changes:
>>>
>>> [mpark] Introduced filter for non-revocable resources.
>>>
>>> [mpark] Updated codebase to use `nonRevocable()` where appropriate.
>>>
>>> --
>>> [...truncated 150475 lines...]
>>> I1201 13:13:39.133826 30312 gc.cpp:54] Scheduling
>>> '/tmp/ContentType_SchedulerTest_Message_1_TaiosT/slaves/4ee094e8-02a9-49de-b684-4d65e1dbab3a-S0/frameworks/4ee094e8-02a9-49de-b684-4d65e1dbab3a-/executors/default'
>>> for gc 6.984545837days in the future
>>> I1201 13:13:39.133921 30312 gc.cpp:54] Scheduling
>>> '/tmp/ContentType_SchedulerTest_Message_1_TaiosT/slaves/4ee094e8-02a9-49de-b684-4d65e1dbab3a-S0/frameworks/4ee094e8-02a9-49de-b684-4d65e1dbab3a-'
>>> for gc 6.9845272days in the future
>>> I1201 13:13:39.134052 30312 status_update_manager.cpp:282] Closing
>>> status update streams for framework
>>> 4ee094e8-02a9-49de-b684-4d65e1dbab3a-
>>> I1201 13:13:39.134109 30312 status_update_manager.cpp:528] Cleaning up
>>> status update stream for task df889fba-4ec5-48a2-bb6a-b82e50d823ee of
>>> framework 4ee094e8-02a9-49de-b684-4d65e1dbab3a-
>>> [   OK ] ContentType/SchedulerTest.Message/1 (620 ms)
>>> [ RUN  ] ContentType/SchedulerTest.Request/0
>>> I1201 13:13:39.249682 30288 leveldb.cpp:174] Opened db in 108.293757ms
>>> I1201 13:13:39.286597 30288 leveldb.cpp:181] Compacted db in 36.820837ms
>>> I1201 13:13:39.286878 30288 leveldb.cpp:196] Created db iterator in
>>> 28745ns
>>> I1201 13:13:39.287019 30288 leveldb.cpp:202] Seeked to beginning of db
>>> in 4075ns
>>> I1201 13:13:39.287135 30288 leveldb.cpp:271] Iterated through 0 keys in
>>> the db in 506ns
>>> I1201 13:13:39.287328 30288 replica.cpp:778] Replica recovered with log
>>> positions 0 -> 0 with 1 holes and 0 unlearned
>>> I1201 13:13:39.288477 30322 recover.cpp:447] Starting replica recovery
>>> I1201 13:13:39.288979 30322 recover.cpp:473] Replica is in EMPTY status
>>> I1201 13:13:39.291091 30307 replica.cpp:674] Replica in EMPTY status
>>> received a broadcasted recover request from (11259)@172.17.21.0:52024
>>> I1201 13:13:39.292063 30322 recover.cpp:193] Received a recover response
>>> from a replica in EMPTY status
>>> I1201 13:13:39.292816 30322 recover.cpp:564] Updating replica status to
>>> STARTING
>>> I1201 13:13:39.310650 30317 master.cpp:365] Master
>>> afc8b3bb-8b30-42f9-bb3d-e218e05c5758 (fa812f474cf4) started on
>>> 172.17.21.0:52024
>>> I1201 13:13:39.310979 30317 master.cpp:367] Flags at startup: --acls=""
>>> --allocation_interval="1secs" --allocator="HierarchicalDRF"
>>> --authenticate="false" --authenticate_slaves="true"
>>> --authenticators="crammd5" --authorizers="local"
>>> --credentials="/tmp/ZHbR13/credentials" --framework_sorter="drf"
>>> --help="false" --hostname_lookup="true" --initialize_driver_logging="true"
>>> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
>>> --max_slave_ping_timeouts="5" --quiet="false"
>>> --recovery_slave_removal_limit="100%" --registry="replicated_log"
>>> --registry_fetch_timeout="1mins" --registry_store_timeout="25secs"
>>> --registry_strict="true" --root_submissions="true"
>>> --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
>>> --user_sorter="drf" --version="false"
>>> --webui_dir="/mesos/mesos-0.27.0/_inst/share

Re: Build failed in Jenkins: mesos-reviewbot #9787

2015-11-20 Thread Benjamin Mahler
Hey Vinod / Jie,

It appears that these recent failures come from the review here:
https://reviews.apache.org/r/40418/. It looks like when verify_reviews.py
hits a "500 Internal Server Error", it bails out and emails the list.
Unfortunately, this ends up being misleading in this case, because at first
glance it looks like the build itself is failing (I was tripped up by the
CurlFetcherPluginTest failures above).

Any ideas on how to make the reviewbot failures less misleading?
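
One option is to retry transient server errors before giving up. The sketch
below is hypothetical (it is not the actual verify_reviews.py, and the
`fetch_with_retry` / `TransientServerError` names are made up for
illustration): it retries an API call that returns a 500 a few times with
backoff, and only raises — and thus would only email the list — if every
attempt fails.

```python
# Hypothetical sketch, not the real verify_reviews.py: retry transient
# HTTP 500s from the review server instead of failing on the first one.
import time

class TransientServerError(Exception):
    """Stands in for an HTTP 500 from the Review Board API."""

def fetch_with_retry(fetch, attempts=3, delay=0.01):
    """Call fetch(); on a transient error, retry up to `attempts` times.
    Only raise (and thus report a failure) if every attempt fails."""
    for i in range(attempts):
        try:
            return fetch()
        except TransientServerError:
            if i == attempts - 1:
                raise
            time.sleep(delay * (2 ** i))  # simple exponential backoff

# Demo: a fake API call that returns 500 twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientServerError("500 Internal Server Error")
    return "review list"

print(fetch_with_retry(flaky_fetch))  # retries past the two 500s
```

Distinguishing "the reviewbot infrastructure broke" from "the patch broke the
build" in the email subject would also make these reports less misleading.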

On Thu, Nov 19, 2015 at 11:52 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> --
> [...truncated 148679 lines...]
> I1119 22:51:39.306417 30230 replica.cpp:323] Persisted replica status to
> VOTING
> I1119 22:51:39.306602 30229 recover.cpp:580] Successfully joined the Paxos
> group
> I1119 22:51:39.306910 30229 recover.cpp:464] Recover process terminated
> I1119 22:51:39.307488 30231 log.cpp:661] Attempting to start the writer
> I1119 22:51:39.309092 30227 replica.cpp:496] Replica received implicit
> promise request from (10553)@172.17.19.74:52056 with proposal 1
> I1119 22:51:39.309515 30227 leveldb.cpp:306] Persisting metadata (8 bytes)
> to leveldb took 376214ns
> I1119 22:51:39.309556 30227 replica.cpp:345] Persisted promised to 1
> I1119 22:51:39.310180 30221 coordinator.cpp:240] Coordinator attempting to
> fill missing positions
> I1119 22:51:39.311311 30224 replica.cpp:391] Replica received explicit
> promise request from (10554)@172.17.19.74:52056 for position 0 with
> proposal 2
> I1119 22:51:39.311645 30224 leveldb.cpp:343] Persisting action (8 bytes)
> to leveldb took 296670ns
> I1119 22:51:39.311667 30224 replica.cpp:715] Persisted action at 0
> I1119 22:51:39.312527 30231 replica.cpp:540] Replica received write
> request for position 0 from (10555)@172.17.19.74:52056
> I1119 22:51:39.312585 30231 leveldb.cpp:438] Reading position from leveldb
> took 26213ns
> I1119 22:51:39.312921 30231 leveldb.cpp:343] Persisting action (14 bytes)
> to leveldb took 288780ns
> I1119 22:51:39.312943 30231 replica.cpp:715] Persisted action at 0
> I1119 22:51:39.313567 30220 replica.cpp:694] Replica received learned
> notice for position 0 from @0.0.0.0:0
> I1119 22:51:39.313930 30220 leveldb.cpp:343] Persisting action (16 bytes)
> to leveldb took 332743ns
> I1119 22:51:39.313953 30220 replica.cpp:715] Persisted action at 0
> I1119 22:51:39.313969 30220 replica.cpp:700] Replica learned NOP action at
> position 0
> I1119 22:51:39.314502 30224 log.cpp:677] Writer started with ending
> position 0
> I1119 22:51:39.315490 30220 leveldb.cpp:438] Reading position from leveldb
> took 24579ns
> I1119 22:51:39.316335 30229 registrar.cpp:342] Successfully fetched the
> registry (0B) in 10.753792ms
> I1119 22:51:39.316457 30229 registrar.cpp:441] Applied 1 operations in
> 27472ns; attempting to update the 'registry'
> I1119 22:51:39.317178 30222 log.cpp:685] Attempting to append 176 bytes to
> the log
> I1119 22:51:39.317297 30223 coordinator.cpp:350] Coordinator attempting to
> write APPEND action at position 1
> I1119 22:51:39.318049 30219 replica.cpp:540] Replica received write
> request for position 1 from (10556)@172.17.19.74:52056
> I1119 22:51:39.318389 30219 leveldb.cpp:343] Persisting action (195 bytes)
> to leveldb took 300611ns
> I1119 22:51:39.318411 30219 replica.cpp:715] Persisted action at 1
> I1119 22:51:39.318946 30222 replica.cpp:694] Replica received learned
> notice for position 1 from @0.0.0.0:0
> I1119 22:51:39.319262 30222 leveldb.cpp:343] Persisting action (197 bytes)
> to leveldb took 282235ns
> I1119 22:51:39.319286 30222 replica.cpp:715] Persisted action at 1
> I1119 22:51:39.319311 30222 replica.cpp:700] Replica learned APPEND action
> at position 1
> I1119 22:51:39.320333 30226 registrar.cpp:486] Successfully updated the
> 'registry' in 3.81696ms
> I1119 22:51:39.320457 30226 registrar.cpp:372] Successfully recovered
> registrar
> I1119 22:51:39.320579 30220 log.cpp:704] Attempting to truncate the log to
> 1
> I1119 22:51:39.320813 30226 coordinator.cpp:350] Coordinator attempting to
> write TRUNCATE action at position 2
> I1119 22:51:39.320878 30227 master.cpp:1422] Recovered 0 slaves from the
> Registry (137B) ; allowing 10mins for slaves to re-register
> I1119 22:51:39.321463 30233 replica.cpp:540] Replica received write
> request for position 2 from (10557)@172.17.19.74:52056
> I1119 22:51:39.321768 30233 leveldb.cpp:343] Persisting action (16 bytes)
> to leveldb took 256133ns
> I1119 22:51:39.321789 30233 replica.cpp:715] Persisted action at 2
> I1119 22:51:39.322340 30226 replica.cpp:694] Replica received learned
> notice for position 2 from @0.0.0.0:0
> I1119 22:51:39.322696 30226 leveldb.cpp:343] Persisting action (18 bytes)
> to leveldb took 325723ns
> I1119 22:51:39.322742 30226 leveldb.cpp:401] Deleting ~1 keys from leveldb
> took 24686ns
> I1119 22:51:39.322764 30226 replica.cpp:715] Persisted 

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,ubuntu:14.04,docker||Hadoop #921

2015-10-13 Thread Benjamin Mahler
Hm.. can you point me to how RST can be caused by memory pressure? Also, if
that's really the cause, is there a reason we've only ever seen it with the
docker registry client test?

On Tue, Oct 13, 2015 at 2:22 PM, Jojy Varghese <j...@mesosphere.io> wrote:

> Could not reproduce. Wondering if this is a one-off due to system issues.
> RST could be caused by system memory pressure.
>
> -Jojy
>
> On Oct 13, 2015, at 1:24 PM, Benjamin Mahler <benjamin.mah...@gmail.com>
> wrote:
>
> +tim, jojy
>
> Could you guys triage this test failure?
>
> On Tue, Oct 13, 2015 at 2:54 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> See <
>> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/921/changes
>> <https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=ubuntu:14.04,label_exp=docker%7C%7CHadoop/921/changes>
>> >
>>
>> Changes:
>>
>> [mpark] Changed secret field in `Credential` from `bytes` to `string`
>>
>> --
>> [...truncated 137652 lines...]
>> I1013 09:54:28.553485 30601 slave.cpp:3721] Cleaning up framework
>> b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-
>> I1013 09:54:28.553521 30599 gc.cpp:56] Scheduling
>> '/tmp/ContentType_SchedulerTest_Message_1_wRAnsK/slaves/b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-S0/frameworks/b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-/executors/default'
>> for gc 6.9359463111days in the future
>> I1013 09:54:28.553673 30604 status_update_manager.cpp:284] Closing status
>> update streams for framework b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-
>> I1013 09:54:28.553737 30606 gc.cpp:56] Scheduling
>> '/tmp/ContentType_SchedulerTest_Message_1_wRAnsK/slaves/b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-S0/frameworks/b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-'
>> for gc 6.9359203556days in the future
>> I1013 09:54:28.553738 30604 status_update_manager.cpp:530] Cleaning up
>> status update stream for task e99b57ba-21f7-48e7-a5ca-6b30bf56941d of
>> framework b85198c2-25fb-4a7f-8fa7-4e11bf7dbbd1-
>> [   OK ] ContentType/SchedulerTest.Message/1 (640 ms)
>> [ RUN  ] ContentType/SchedulerTest.Request/0
>> Using temporary directory
>> '/tmp/ContentType_SchedulerTest_Request_0_n99C9B'
>> I1013 09:54:28.663244 30579 leveldb.cpp:176] Opened db in 104.020401ms
>> I1013 09:54:28.705338 30579 leveldb.cpp:183] Compacted db in 42.050441ms
>> I1013 09:54:28.705405 30579 leveldb.cpp:198] Created db iterator in
>> 20999ns
>> I1013 09:54:28.705427 30579 leveldb.cpp:204] Seeked to beginning of db in
>> 2390ns
>> I1013 09:54:28.705442 30579 leveldb.cpp:273] Iterated through 0 keys in
>> the db in 264ns
>> I1013 09:54:28.705483 30579 replica.cpp:746] Replica recovered with log
>> positions 0 -> 0 with 1 holes and 0 unlearned
>> I1013 09:54:28.705940 30601 recover.cpp:449] Starting replica recovery
>> I1013 09:54:28.706272 30601 recover.cpp:475] Replica is in EMPTY status
>> I1013 09:54:28.707350 30609 replica.cpp:642] Replica in EMPTY status
>> received a broadcasted recover request from (9953)@172.17.6.112:56612
>> I1013 09:54:28.707788 30600 recover.cpp:195] Received a recover response
>> from a replica in EMPTY status
>> I1013 09:54:28.708353 30604 recover.cpp:566] Updating replica status to
>> STARTING
>> I1013 09:54:28.709435 30607 master.cpp:376] Master
>> 7a1c3c9c-ef37-490d-90df-63aab52ad653 (e582ef081920) started on
>> 172.17.6.112:56612
>> I1013 09:54:28.709467 30607 master.cpp:378] Flags at startup: --acls=""
>> --allocation_interval="1secs" --allocator="HierarchicalDRF"
>> --authenticate="false" --authenticate_slaves="true"
>> --authenticators="crammd5" --authorizers="local"
>> --credentials="/tmp/ContentType_SchedulerTest_Request_0_n99C9B/credentials"
>> --framework_sorter="drf" --help="false" --hostname_lookup="true"
>> --initialize_driver_logging="true" --log_auto_initialize="true"
>> --logbufsecs="0" --logging_level="INFO" --max_slave_ping_timeouts="5"
>> --quiet="false" --recovery_slave_removal_limit="100%"
>> --registry="replicated_log" --registry_fetch_timeout="1mins"
>> --registry_store_timeout="25secs" --registry_strict="true"
>> --root_submissions="true" --slave_ping_timeout="15secs"
>> --slave_
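The replica recovery statuses that recur throughout these test logs (EMPTY, STARTING, VOTING, from recover.cpp and replica.cpp) can be read as a small state machine: an EMPTY replica broadcasts a recover request, moves to STARTING while it catches up, and participates in quorum writes only after persisting VOTING. The sketch below is an illustrative simplification for reading the logs, not the actual Mesos implementation:

```cpp
#include <cassert>

// Replicated-log recovery statuses as they appear in the logs above.
enum class Status { EMPTY, STARTING, VOTING };

// One step of the recovery protocol, keyed to the log messages:
//   EMPTY    -> STARTING  "Updating replica status to STARTING"
//   STARTING -> VOTING    "Persisted replica status to VOTING"
//   VOTING   -> VOTING    "Recover process terminated" (nothing to do)
Status advance(Status status) {
  switch (status) {
    case Status::EMPTY:    return Status::STARTING;
    case Status::STARTING: return Status::VOTING;
    default:               return Status::VOTING;
  }
}
```

Under this reading, a test that starts with a fresh db ("Replica is in EMPTY status") always walks EMPTY, STARTING, VOTING before the registrar can recover.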

Re: Build failed in Jenkins: Mesos » clang,--verbose,ubuntu:14.04,docker||Hadoop #923

2015-10-13 Thread Benjamin Mahler
+anand

Looks like the ExamplesTest.EventCallFramework was running forever here.

On Tue, Oct 13, 2015 at 5:18 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=clang,CONFIGURATION=--verbose,OS=ubuntu%3A14.04,label_exp=docker%7C%7CHadoop/923/changes
> >
>
> Changes:
>
> [niklas] Added a new callback enabling custom attribute discovery logic.
>
> --
> [...truncated 87182 lines...]
> I1014 00:18:32.566576 28546 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 715547ns
> I1014 00:18:32.704258 28540 slave.cpp:4372] Querying resource estimator
> for oversubscribable resources
> I1014 00:18:32.704535 28536 slave.cpp:4386] Received oversubscribable
> resources  from the resource estimator
> I1014 00:18:32.707561 28533 slave.cpp:4372] Querying resource estimator
> for oversubscribable resources
> I1014 00:18:32.707607 28543 slave.cpp:4372] Querying resource estimator
> for oversubscribable resources
> I1014 00:18:32.707794 28542 slave.cpp:4386] Received oversubscribable
> resources  from the resource estimator
> I1014 00:18:32.707923 28541 slave.cpp:4386] Received oversubscribable
> resources  from the resource estimator
> I1014 00:18:33.054343 28543 slave.cpp:3205] Received ping from
> slave-observer(1)@172.17.3.132:38365
> I1014 00:18:33.094724 28542 slave.cpp:3205] Received ping from
> slave-observer(2)@172.17.3.132:38365
> I1014 00:18:33.095901 28539 slave.cpp:3205] Received ping from
> slave-observer(3)@172.17.3.132:38365
> I1014 00:18:33.568006 28542 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:33.568105 28542 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:33.568136 28542 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 575515ns
> I1014 00:18:34.569293 28545 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:34.569383 28545 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:34.569416 28545 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 760022ns
> I1014 00:18:35.570439 28546 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:35.570508 28546 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:35.570533 28546 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 633023ns
> I1014 00:18:36.571583 28540 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:36.571681 28540 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:36.571714 28540 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 733712ns
> I1014 00:18:37.572648 28543 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:37.572739 28543 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:37.572774 28543 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 599820ns
> I1014 00:18:38.573972 28541 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:38.574092 28541 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:38.574131 28541 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 820042ns
> I1014 00:18:39.575865 28538 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:39.575944 28538 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:39.575986 28538 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 638676ns
> I1014 00:18:40.577955 28534 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:40.578085 28534 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:40.578132 28534 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 899361ns
> I1014 00:18:41.579546 28532 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:41.579607 28532 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:41.579632 28532 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 594930ns
> I1014 00:18:42.580958 28535 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:42.581084 28535 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:42.581123 28535 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 870977ns
> I1014 00:18:43.582335 28536 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:43.582433 28536 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:43.582468 28536 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 756692ns
> I1014 00:18:44.583494 28539 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:44.583580 28539 hierarchical.cpp:1045] No inverse offers to
> send out!
> I1014 00:18:44.583606 28539 hierarchical.cpp:851] Performed allocation for
> 3 slaves in 596706ns
> I1014 00:18:45.584910 28542 hierarchical.cpp:952] No resources available
> to allocate!
> I1014 00:18:45.585011 28542 
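The once-per-second cadence of the "No resources available to allocate!" / "Performed allocation for 3 slaves" pairs above matches the `--allocation_interval="1secs"` flag printed at master startup: each interval the hierarchical allocator scans every agent and, here, finds nothing left to offer. A hypothetical, heavily simplified version of that per-round scan (not the real hierarchical.cpp, which also handles memory, disk, ports, roles, and DRF sorting):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical simplification: model only unallocated CPUs per agent.
using FreeCpus = std::map<std::string, double>;  // agent ID -> free cpus

// Returns how many agents could produce an offer this round; when it is
// zero, the allocator logs "No resources available to allocate!".
int offerableAgents(const FreeCpus& free) {
  int count = 0;
  for (const auto& entry : free) {
    if (entry.second > 0.0) {
      ++count;  // this agent still has unallocated resources
    }
  }
  return count;
}
```

In the hung run above, all three agents are fully allocated, so every round degenerates to the zero-offer case until the test times out.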

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,centos:7,docker||Hadoop #890

2015-10-05 Thread Benjamin Mahler
Fixed in: https://issues.apache.org/jira/browse/MESOS-3577

On Sat, Oct 3, 2015 at 1:31 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=centos%3A7,label_exp=docker%7C%7CHadoop/890/changes
> >
>
> Changes:
>
> [joris.van.remoortere] Generate Java V1 Protobufs.
>
> --
> [...truncated 134684 lines...]
> I1003 20:31:26.935798 31568 slave.cpp:3544] Cleaning up executor 'default'
> of framework 224019a0-d443-4ff2-96b4-10a11c17aa3d-
> I1003 20:31:26.936024 31571 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_kDA90t/slaves/224019a0-d443-4ff2-96b4-10a11c17aa3d-S0/frameworks/224019a0-d443-4ff2-96b4-10a11c17aa3d-/executors/default/runs/b75fab2e-8565-40d0-9032-859fed21d2e3'
> for gc 6.8916746667days in the future
> I1003 20:31:26.936125 31568 slave.cpp:3633] Cleaning up framework
> 224019a0-d443-4ff2-96b4-10a11c17aa3d-
> I1003 20:31:26.936184 31571 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_kDA90t/slaves/224019a0-d443-4ff2-96b4-10a11c17aa3d-S0/frameworks/224019a0-d443-4ff2-96b4-10a11c17aa3d-/executors/default'
> for gc 6.8916576296days in the future
> I1003 20:31:26.936226 31577 status_update_manager.cpp:284] Closing status
> update streams for framework 224019a0-d443-4ff2-96b4-10a11c17aa3d-
> I1003 20:31:26.936396 31571 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_kDA90t/slaves/224019a0-d443-4ff2-96b4-10a11c17aa3d-S0/frameworks/224019a0-d443-4ff2-96b4-10a11c17aa3d-'
> for gc 6.8916395852days in the future
> I1003 20:31:26.936451 31577 status_update_manager.cpp:530] Cleaning up
> status update stream for task b3ee7750-63ce-4b9a-917e-04b317621a75 of
> framework 224019a0-d443-4ff2-96b4-10a11c17aa3d-
> [   OK ] ContentType/SchedulerTest.Message/1 (105 ms)
> [ RUN  ] ContentType/SchedulerTest.Request/0
> Using temporary directory '/tmp/ContentType_SchedulerTest_Request_0_9KBzRb'
> I1003 20:31:26.943343 31549 leveldb.cpp:176] Opened db in 2.643266ms
> I1003 20:31:26.944468 31549 leveldb.cpp:183] Compacted db in 1.08934ms
> I1003 20:31:26.944531 31549 leveldb.cpp:198] Created db iterator in 21933ns
> I1003 20:31:26.944548 31549 leveldb.cpp:204] Seeked to beginning of db in
> 1962ns
> I1003 20:31:26.944557 31549 leveldb.cpp:273] Iterated through 0 keys in
> the db in 391ns
> I1003 20:31:26.944602 31549 replica.cpp:744] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I1003 20:31:26.945060 31571 recover.cpp:449] Starting replica recovery
> I1003 20:31:26.945502 31571 recover.cpp:475] Replica is in EMPTY status
> I1003 20:31:26.946650 31571 replica.cpp:641] Replica in EMPTY status
> received a broadcasted recover request
> I1003 20:31:26.947477 31577 recover.cpp:195] Received a recover response
> from a replica in EMPTY status
> I1003 20:31:26.947746 31581 master.cpp:376] Master
> 94d2511e-eee6-4866-8e56-974c3005eaf6 (9efc27440ed0) started on
> 172.17.5.73:38504
> I1003 20:31:26.947770 31581 master.cpp:378] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_slaves="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/ContentType_SchedulerTest_Request_0_9KBzRb/credentials"
> --framework_sorter="drf" --help="false" --hostname_lookup="true"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_slave_ping_timeouts="5"
> --quiet="false" --recovery_slave_removal_limit="100%"
> --registry="replicated_log" --registry_fetch_timeout="1mins"
> --registry_store_timeout="25secs" --registry_strict="true"
> --root_submissions="true" --slave_ping_timeout="15secs"
> --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false"
> --webui_dir="/mesos/mesos-0.26.0/_inst/share/mesos/webui"
> --work_dir="/tmp/ContentType_SchedulerTest_Request_0_9KBzRb/master"
> --zk_session_timeout="10secs"
> I1003 20:31:26.948117 31581 master.cpp:425] Master allowing
> unauthenticated frameworks to register
> I1003 20:31:26.948132 31581 master.cpp:428] Master only allowing
> authenticated slaves to register
> I1003 20:31:26.948143 31581 credentials.hpp:37] Loading credentials for
> authentication from
> '/tmp/ContentType_SchedulerTest_Request_0_9KBzRb/credentials'
> I1003 20:31:26.948341 31576 recover.cpp:566] Updating replica status to
> STARTING
> I1003 20:31:26.948417 31581 master.cpp:467] Using default 'crammd5'
> authenticator
> I1003 20:31:26.948561 31581 master.cpp:504] Authorization enabled
> I1003 20:31:26.948895 31573 whitelist_watcher.cpp:79] No whitelist given
> I1003 20:31:26.949038 31570 hierarchical.hpp:468] Initialized hierarchical
> allocator process
> I1003 20:31:26.949053 31577 leveldb.cpp:306] Persisting metadata (8 bytes)
> to leveldb took 577328ns
> I1003 

Re: Build failed in Jenkins: Mesos » gcc,--verbose --enable-libevent --enable-ssl,centos:7,docker||Hadoop #755

2015-08-31 Thread Benjamin Mahler
+bernd

Re-opened https://issues.apache.org/jira/browse/MESOS-2858

On Mon, Aug 31, 2015 at 6:33 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/Mesos/COMPILER=gcc,CONFIGURATION=--verbose%20--enable-libevent%20--enable-ssl,OS=centos%3A7,label_exp=docker%7C%7CHadoop/755/changes
> >
>
> Changes:
>
> [benjamin.mahler] Fixed a non-plural variable name in perf.hpp.
>
> --
> [...truncated 121858 lines...]
> I0901 01:34:13.515564 30509 hierarchical.hpp:428] Removed framework
> 20150901-013413-1996493228-35897-30488-
> E0901 01:34:13.516106 30509 scheduler.cpp:435] End-Of-File received from
> master. The master closed the event stream
> I0901 01:34:13.516243 30515 slave.cpp:3143] master@172.17.0.119:35897
> exited
> W0901 01:34:13.516481 30515 slave.cpp:3146] Master disconnected! Waiting
> for a new master to be elected
> I0901 01:34:13.519942 30508 slave.cpp:3399] Executor 'default' of
> framework 20150901-013413-1996493228-35897-30488- exited with status 0
> I0901 01:34:13.521723 30508 slave.cpp:2696] Handling status update
> TASK_LOST (UUID: 80a7ec7c-4558-4c42-8a68-9b282052b4a5) for task
> d5dad94b-5a73-4b32-9d3f-5432e6b6ad3c of framework
> 20150901-013413-1996493228-35897-30488- from @0.0.0.0:0
> I0901 01:34:13.521848 30508 slave.cpp:5094] Terminating task
> d5dad94b-5a73-4b32-9d3f-5432e6b6ad3c
> I0901 01:34:13.522336 30508 slave.cpp:564] Slave terminating
> I0901 01:34:13.522413 30508 slave.cpp:1959] Asked to shut down framework
> 20150901-013413-1996493228-35897-30488- by @0.0.0.0:0
> I0901 01:34:13.522534 30508 slave.cpp:1984] Shutting down framework
> 20150901-013413-1996493228-35897-30488-
> I0901 01:34:13.522689 30508 slave.cpp:3503] Cleaning up executor 'default'
> of framework 20150901-013413-1996493228-35897-30488-
> I0901 01:34:13.523030 30509 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_R2AKVz/slaves/20150901-013413-1996493228-35897-30488-S0/frameworks/20150901-013413-1996493228-35897-30488-/executors/default/runs/fd926952-f7cf-40c8-be62-bc34289b89cf'
> for gc 6.9394757037days in the future
> I0901 01:34:13.523133 30508 slave.cpp:3592] Cleaning up framework
> 20150901-013413-1996493228-35897-30488-
> I0901 01:34:13.523282 30509 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_R2AKVz/slaves/20150901-013413-1996493228-35897-30488-S0/frameworks/20150901-013413-1996493228-35897-30488-/executors/default'
> for gc 6.9394579852days in the future
> I0901 01:34:13.523396 30518 status_update_manager.cpp:284] Closing status
> update streams for framework 20150901-013413-1996493228-35897-30488-
> I0901 01:34:13.524374 30518 status_update_manager.cpp:530] Cleaning up
> status update stream for task d5dad94b-5a73-4b32-9d3f-5432e6b6ad3c of
> framework 20150901-013413-1996493228-35897-30488-
> I0901 01:34:13.523474 30509 gc.cpp:56] Scheduling
> '/tmp/ContentType_SchedulerTest_Message_1_R2AKVz/slaves/20150901-013413-1996493228-35897-30488-S0/frameworks/20150901-013413-1996493228-35897-30488-'
> for gc 6.9394276741days in the future
> [   OK ] ContentType/SchedulerTest.Message/1 (125 ms)
> [ RUN  ] ContentType/SchedulerTest.Request/0
> Using temporary directory '/tmp/ContentType_SchedulerTest_Request_0_vR0XnP'
> I0901 01:34:13.531173 30488 leveldb.cpp:176] Opened db in 2.561692ms
> I0901 01:34:13.532003 30488 leveldb.cpp:183] Compacted db in 783277ns
> I0901 01:34:13.532119 30488 leveldb.cpp:198] Created db iterator in 22761ns
> I0901 01:34:13.532143 30488 leveldb.cpp:204] Seeked to beginning of db in
> 2055ns
> I0901 01:34:13.532160 30488 leveldb.cpp:273] Iterated through 0 keys in
> the db in 472ns
> I0901 01:34:13.532212 30488 replica.cpp:744] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I0901 01:34:13.532742 30511 recover.cpp:449] Starting replica recovery
> I0901 01:34:13.532968 30511 recover.cpp:475] Replica is in EMPTY status
> I0901 01:34:13.534502 30508 replica.cpp:641] Replica in EMPTY status
> received a broadcasted recover request
> I0901 01:34:13.535284 30508 recover.cpp:195] Received a recover response
> from a replica in EMPTY status
> I0901 01:34:13.537603 30516 recover.cpp:566] Updating replica status to
> STARTING
> I0901 01:34:13.538120 30518 master.cpp:378] Master
> 20150901-013413-1996493228-35897-30488 (998e845ced4a) started on
> 172.17.0.119:35897
> I0901 01:34:13.538146 30518 master.cpp:380] Flags at startup: --acls=""
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate="false" --authenticate_slaves="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/ContentType_SchedulerTest_Request_0_vR0XnP/credentials"
> --framework_sorter="drf" --help="false" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_slave_ping_timeouts="5" --quiet="false"
> 

Re: Build failed in Jenkins: mesos-reviewbot #7547

2015-08-06 Thread Benjamin Mahler
Hm... https://reviews.apache.org/r/37113/ was discarded 5 hours ago; is
reviewbot not ignoring discarded reviews?

On Thu, Aug 6, 2015 at 7:48 PM, Benjamin Mahler benjamin.mah...@gmail.com
wrote:

 403 forbidden looks like more of an auth issue.. I wonder if there is a
 bad review url that reviewbot is picking up.

 On Thu, Aug 6, 2015 at 6:27 PM, Michael Park mcyp...@gmail.com wrote:

 Mm... Still failing, maybe it wasn't the git mirror issue?

 On Thu, Aug 6, 2015, 5:46 PM Michael Park mcyp...@gmail.com wrote:

  Vinod, I think it was probably the git mirror issues that Jake just
 fixed.
 
  On Thu, Aug 6, 2015 at 5:44 PM Vinod Kone vinodk...@apache.org wrote:
 
  On Thu, Aug 6, 2015 at 5:25 PM, Apache Jenkins Server 
  jenk...@builds.apache.org wrote:
 
   Verifying review 37114
   Dependent review:
 https://reviews.apache.org/api/review-requests/37113/
  
 
  jake, looks like our review bot is always crashing on this review
 because
  it gets a 503? Can you take a look if something is funky on the RB
 side?
 
 





Re: Build failed in Jenkins: mesos-reviewbot #7415

2015-07-30 Thread Benjamin Mahler
Pushed a fix; looks like the namespace clause was missed.

On Thu, Jul 30, 2015 at 5:12 PM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 See https://builds.apache.org/job/mesos-reviewbot/7415/

 --
 [URLTrigger] A change within the response URL invocation (log)
 Building remotely on ubuntu-6 (docker Ubuntu ubuntu) in workspace 
 https://builds.apache.org/job/mesos-reviewbot/ws/
   git rev-parse --is-inside-work-tree # timeout=10
 Fetching changes from the remote Git repository
   git config remote.origin.url
 https://git-wip-us.apache.org/repos/asf/mesos.git # timeout=10
 Fetching upstream changes from
 https://git-wip-us.apache.org/repos/asf/mesos.git
   git --version # timeout=10
   git fetch --tags --progress
 https://git-wip-us.apache.org/repos/asf/mesos.git
 +refs/heads/*:refs/remotes/origin/*
   git rev-parse origin/master^{commit} # timeout=10
 Checking out Revision 8cb59f3886cf3bfcb4f60648476155a768a837ec
 (origin/master)
   git config core.sparsecheckout # timeout=10
   git checkout -f 8cb59f3886cf3bfcb4f60648476155a768a837ec
   git rev-list 8cb59f3886cf3bfcb4f60648476155a768a837ec # timeout=10
   git tag -a -f -m Jenkins Build #7415 jenkins-mesos-reviewbot-7415 #
 timeout=10
 [mesos-reviewbot] $ /bin/bash -xe /tmp/hudson1603543522590838005.sh
 + export JAVA_HOME=/home/jenkins/tools/java/jdk1.6.0_20-64
 + JAVA_HOME=/home/jenkins/tools/java/jdk1.6.0_20-64
 + export
 PATH=/home/jenkins/tools/java/jdk1.6.0_20-64/bin:/home/jenkins/tools/java/latest1.6/bin:/home/jenkins/tools/java/latest1.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
 +
 PATH=/home/jenkins/tools/java/jdk1.6.0_20-64/bin:/home/jenkins/tools/java/latest1.6/bin:/home/jenkins/tools/java/latest1.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
 + export M2_HOME=/home/jenkins/tools/maven/latest
 + M2_HOME=/home/jenkins/tools/maven/latest
 + export
 PATH=/home/jenkins/tools/maven/latest/bin:/home/jenkins/tools/java/jdk1.6.0_20-64/bin:/home/jenkins/tools/java/latest1.6/bin:/home/jenkins/tools/java/latest1.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
 +
 PATH=/home/jenkins/tools/maven/latest/bin:/home/jenkins/tools/java/jdk1.6.0_20-64/bin:/home/jenkins/tools/java/latest1.6/bin:/home/jenkins/tools/java/latest1.6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
 + date
 Fri Jul 31 00:11:37 UTC 2015
 + chmod -R +w 3rdparty bin bootstrap CHANGELOG cmake CMakeLists.txt
 configure.ac Dockerfile docs Doxyfile include LICENSE m4 Makefile.am
 mesos.pc.in mpi NOTICE README.md src support
 + git clean -fdx
 + git reset --hard HEAD
 HEAD is now at 8cb59f3 Style change: Space after the ... in variadic
 templates.
 + ./support/mesos-style.py
 Checking 688 files using filter
 --filter=-,+build/class,+build/deprecated,+build/endif_comment,+readability/todo,+readability/namespace,+runtime/vlog,+whitespace/blank_line,+whitespace/comma,+whitespace/end_of_line,+whitespace/ending_newline,+whitespace/forcolon,+whitespace/indent,+whitespace/line_length,+whitespace/operators,+whitespace/semicolon,+whitespace/tab,+whitespace/todo
 3rdparty/libprocess/3rdparty/stout/include/stout/os/windows/fork.hpp:31:
 Redundant blank line at the end of a code block should be deleted.
 [whitespace/blank_line] [3]
 Total errors found: 1
 Build step 'Execute shell' marked build as failure
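The single style failure above is cpplint's whitespace/blank_line check: a blank line sitting immediately before the closing brace of a block. A toy re-implementation of just that rule, to show what the checker flags (illustrative only; the real logic lives in support/mesos-style.py and the cpplint filter it configures):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Toy version of the whitespace/blank_line rule that failed this build:
// count blank lines that sit immediately before a line whose first
// non-space character closes a block ('}').
int redundantBlankLines(const std::string& source) {
  std::vector<std::string> lines;
  std::istringstream in(source);
  for (std::string line; std::getline(in, line);) {
    lines.push_back(line);
  }
  int errors = 0;
  for (size_t i = 1; i < lines.size(); ++i) {
    const size_t first = lines[i].find_first_not_of(' ');
    if (first != std::string::npos && lines[i][first] == '}' &&
        lines[i - 1].empty()) {
      ++errors;  // "Redundant blank line at the end of a code block"
    }
  }
  return errors;
}
```

Deleting the blank line before the closing brace (as the fix for fork.hpp did) brings the count back to zero.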



Re: Build failed in Jenkins: Mesos » clang,docker||Hadoop,ubuntu:14.10 #589

2015-07-28 Thread Benjamin Mahler
Should be fixed, sorry about that.

On Tue, Jul 28, 2015 at 2:03 PM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 See 
 https://builds.apache.org/job/Mesos/COMPILER=clang,LABEL=docker%7C%7CHadoop,OS=ubuntu%3A14.10/589/changes
 

 Changes:

 [benjamin.mahler] Updated Framework struct in master for the http api.

 --
 [...truncated 112737 lines...]
 I0728 21:03:45.447605 24805 leveldb.cpp:273] Iterated through 0 keys in
 the db in 6132ns
 I0728 21:03:45.447631 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.448478 24828 leveldb.cpp:306] Persisting metadata (8 bytes)
 to leveldb took 413750ns
 I0728 21:03:45.448510 24828 replica.cpp:323] Persisted replica status to
 VOTING
 I0728 21:03:45.451573 24805 leveldb.cpp:176] Opened db in 2.519775ms
 I0728 21:03:45.453737 24805 leveldb.cpp:183] Compacted db in 2.143507ms
 I0728 21:03:45.453789 24805 leveldb.cpp:198] Created db iterator in 26996ns
 I0728 21:03:45.453832 24805 leveldb.cpp:204] Seeked to beginning of db in
 34118ns
 I0728 21:03:45.453860 24805 leveldb.cpp:273] Iterated through 1 keys in
 the db in 22720ns
 I0728 21:03:45.453886 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.456171 24805 leveldb.cpp:176] Opened db in 2.186331ms
 I0728 21:03:45.458128 24805 leveldb.cpp:183] Compacted db in 1.945429ms
 I0728 21:03:45.458171 24805 leveldb.cpp:198] Created db iterator in 22709ns
 I0728 21:03:45.458210 24805 leveldb.cpp:204] Seeked to beginning of db in
 29524ns
 I0728 21:03:45.458251 24805 leveldb.cpp:273] Iterated through 1 keys in
 the db in 29724ns
 I0728 21:03:45.458284 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.458758 24833 recover.cpp:449] Starting replica recovery
 I0728 21:03:45.458998 24833 recover.cpp:475] Replica is in VOTING status
 I0728 21:03:45.459087 24833 recover.cpp:464] Recover process terminated
 I0728 21:03:45.460877 24833 registrar.cpp:313] Recovering registrar
 [   OK ] Strict/RegistrarTest.FetchTimeout/0 (43 ms)
 [ RUN  ] Strict/RegistrarTest.FetchTimeout/1
 Using temporary directory '/tmp/Strict_RegistrarTest_FetchTimeout_1_sBe9aE'
 I0728 21:03:45.486516 24805 leveldb.cpp:176] Opened db in 2.017962ms
 I0728 21:03:45.487241 24805 leveldb.cpp:183] Compacted db in 709649ns
 I0728 21:03:45.487269 24805 leveldb.cpp:198] Created db iterator in 14704ns
 I0728 21:03:45.487280 24805 leveldb.cpp:204] Seeked to beginning of db in
 4929ns
 I0728 21:03:45.487287 24805 leveldb.cpp:273] Iterated through 0 keys in
 the db in 3743ns
 I0728 21:03:45.487308 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.488216 24823 leveldb.cpp:306] Persisting metadata (8 bytes)
 to leveldb took 590346ns
 I0728 21:03:45.488253 24823 replica.cpp:323] Persisted replica status to
 VOTING
 I0728 21:03:45.491544 24805 leveldb.cpp:176] Opened db in 2.777297ms
 I0728 21:03:45.492388 24805 leveldb.cpp:183] Compacted db in 823691ns
 I0728 21:03:45.492434 24805 leveldb.cpp:198] Created db iterator in 25599ns
 I0728 21:03:45.492451 24805 leveldb.cpp:204] Seeked to beginning of db in
 7780ns
 I0728 21:03:45.492462 24805 leveldb.cpp:273] Iterated through 0 keys in
 the db in 6063ns
 I0728 21:03:45.492496 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.493363 24827 leveldb.cpp:306] Persisting metadata (8 bytes)
 to leveldb took 438494ns
 I0728 21:03:45.493393 24827 replica.cpp:323] Persisted replica status to
 VOTING
 I0728 21:03:45.496592 24805 leveldb.cpp:176] Opened db in 2.65ms
 I0728 21:03:45.498695 24805 leveldb.cpp:183] Compacted db in 2.084173ms
 I0728 21:03:45.498749 24805 leveldb.cpp:198] Created db iterator in 27727ns
 I0728 21:03:45.498788 24805 leveldb.cpp:204] Seeked to beginning of db in
 29654ns
 I0728 21:03:45.498849 24805 leveldb.cpp:273] Iterated through 1 keys in
 the db in 56558ns
 I0728 21:03:45.498889 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.501762 24805 leveldb.cpp:176] Opened db in 2.7236ms
 I0728 21:03:45.503856 24805 leveldb.cpp:183] Compacted db in 2.071776ms
 I0728 21:03:45.503900 24805 leveldb.cpp:198] Created db iterator in 19844ns
 I0728 21:03:45.503923 24805 leveldb.cpp:204] Seeked to beginning of db in
 18821ns
 I0728 21:03:45.503954 24805 leveldb.cpp:273] Iterated through 1 keys in
 the db in 29391ns
 I0728 21:03:45.503985 24805 replica.cpp:744] Replica recovered with log
 positions 0 -> 0 with 1 holes and 0 unlearned
 I0728 21:03:45.504384 24823 recover.cpp:449] Starting replica recovery
 I0728 21:03:45.504724 24823 recover.cpp:475] Replica is in VOTING status
 I0728 21:03:45.504871 24823 recover.cpp:464] Recover process terminated
 I0728 21:03:45.506520 24832 registrar.cpp:313] Recovering