Looks like they've closed the ticket as invalid, in favor of having the
chair create the account.

I'll ping benh for this. See:
http://wiki.apache.org/general/Jenkins?action=show&redirect=Hudson#How_do_I_get_an_account


On Wed, Jun 19, 2013 at 12:51 PM, Benjamin Mahler <[email protected]> wrote:

> Thanks for the pointer! Created
> https://issues.apache.org/jira/browse/INFRA-6413
>
>
> On Tue, Jun 18, 2013 at 3:46 PM, Andy Konwinski 
> <[email protected]>wrote:
>
>> Ben,
>>
>> I just re-enabled our tests, and it looks like they are not hitting these
>> GitHub-repo-related failures any more; e.g., this one ran when I re-enabled
>> it:
>>
>> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/1036/
>>
>> All committers should be able to get access to the Apache Jenkins
>> instance. So, if you're interested, go ahead and request access -- you can
>> see how I got access at https://issues.apache.org/jira/browse/INFRA-4105 --
>> and then you can make these sorts of tweaks directly.
>>
>> Andy
>>
>>
>> On Mon, Jun 17, 2013 at 9:17 PM, Benjamin Mahler
>> <[email protected]>wrote:
>>
>> > Has this been re-enabled?
>> >
>> > +1000 for running our Jenkins tests with --gtest_repeat=25
>> > --gtest_break_on_failure, or the like.
>> >
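
A rough sketch of how that --gtest_repeat / --gtest_break_on_failure idea
could be wired into the job's "Execute shell" build step. The step layout,
configure flags, and paths below are guesses based on the job name rather
than the actual job configuration, and gtest also accepts the same options
as GTEST_* environment variables, which is handy when the tests are driven
through make check:

    # Sketch of an "Execute shell" step (layout, flags, and paths are
    # assumptions, not the real job config):
    ./bootstrap
    mkdir -p build && cd build
    ../configure --disable-java --disable-python --disable-webui
    make -j4

    # gtest reads its flags from GTEST_* environment variables as well, so
    # the repeat / break-on-failure behavior can be turned on without
    # editing the test harness:
    GTEST_REPEAT=25 GTEST_BREAK_ON_FAILURE=1 make check

    # Roughly equivalent direct invocation of the test binary (assuming it
    # lands under src/ in the build directory):
    # ./src/mesos-tests --gtest_repeat=25 --gtest_break_on_failure
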
>> >
>> > On Mon, Jun 10, 2013 at 1:41 PM, Vinod Kone <[email protected]> wrote:
>> >
>> > > Looks like the Mesos git repo wasn't (intermittently?) accessible
>> > > during the weekend?
>> > >
>> > >
>> > > stdout:
>> > > stderr: error: The requested URL returned error: 503 while accessing
>> > > https://git-wip-us.apache.org/repos/asf/incubator-mesos.git/info/refs
>> > > fatal: HTTP request failed
>> > >
>> > >         at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:897)
>> > >         at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:858)
>> > >         at hudson.plugins.git.GitAPI.fetch(GitAPI.java:200)
>> > >         at hudson.plugins.git.GitAPI.fetch(GitAPI.java:1105)
>> > >         at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:813)
>> > >         at hudson.plugins.git.GitSCM.access$100(GitSCM.java:72)
>> > >         at hudson.plugins.git.GitSCM$1.invoke(GitSCM.java:744)
>> > >         at hudson.plugins.git.GitSCM$1.invoke(GitSCM.java:731)
>> > >         at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2348)
>> > >         at hudson.remoting.UserRequest.perform(UserRequest.java:118)
>> > >         at hudson.remoting.UserRequest.perform(UserRequest.java:48)
>> > >         at hudson.remoting.Request$2.run(Request.java:326)
>> > >         at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
>> > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> > >         at java.lang.Thread.run(Thread.java:679)
>> > >
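
For reference, the URL in that trace is git's smart-HTTP info/refs endpoint,
so a quick way to check from a build slave whether the repo is reachable
outside of Jenkins would be something like:

    # Hits the same info/refs endpoint that the Jenkins git plugin failed
    # on; an intermittent outage should reproduce as "error: The requested
    # URL returned error: 503".
    git ls-remote https://git-wip-us.apache.org/repos/asf/incubator-mesos.git HEAD
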
>> > >
>> > >
>> > > I've also seen what looks like a manifestation of
>> > > https://issues.jenkins-ci.org/browse/JENKINS-10131 on some of the
>> > > builds.
>> > >
>> > >
>> > >
>> > > @vinodkone
>> > >
>> > >
>> > > On Sat, Jun 8, 2013 at 8:41 PM, Vinod Kone <[email protected]> wrote:
>> > >
>> > > > Disabled the jobs until we investigate the issue.
>> > > >
>> > > >
>> > > > @vinodkone
>> > > >
>> > > >
>> > > > On Sat, Jun 8, 2013 at 8:36 PM, Apache Jenkins Server <
>> > > > [email protected]> wrote:
>> > > >
>> > > >> See <
>> > > >>
>> > >
>> >
>> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/1213/
>> > > >> >
>> > > >>
>> > > >> ------------------------------------------
>> > > >> [...truncated 11817 lines...]
>> > > >> I0609 03:36:34.014621 31037 hierarchical_allocator_process.hpp:727]
>> > > >> Offering cpus=1; mem=512; ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0 to framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.014704 31037 hierarchical_allocator_process.hpp:727]
>> > > >> Offering cpus=4; mem=2048; ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-1 to framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.015167 31037 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 2 slaves in 572.853us
>> > > >> I0609 03:36:34.015203 31042 master.hpp:313] Adding offer
>> > > >> 201306090336-982172483-39386-30401-4 with resources cpus=4;
>> mem=2048;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-1
>> > > >> I0609 03:36:34.016006 31042 master.hpp:313] Adding offer
>> > > >> 201306090336-982172483-39386-30401-5 with resources cpus=1;
>> mem=512;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.016450 31042 master.cpp:1239] Sending 2 offers to
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.016978 31040 sched.cpp:287] Received 2 offers
>> > > >> I0609 03:36:34.017631 31040 sched.cpp:427] Stopping framework
>> > > >> '201306090336-982172483-39386-30401-0000'
>> > > >> I0609 03:36:34.017678 30401 master.cpp:385] Master terminating
>> > > >> I0609 03:36:34.018409 31036 slave.cpp:493] Slave asked to shut
>> down by
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.018412 30401 master.cpp:207] Shutting down master
>> > > >> I0609 03:36:34.019364 31035 hierarchical_allocator_process.hpp:412]
>> > > >> Deactivated framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.018851 31036 slave.cpp:448] Slave terminating
>> > > >> I0609 03:36:34.019381 30401 master.hpp:303] Removing task 0 with
>> > > >> resources cpus=2; mem=512 on slave
>> > 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.018434 31039 slave.cpp:493] Slave asked to shut
>> down by
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.020907 30401 master.hpp:323] Removing offer
>> > > >> 201306090336-982172483-39386-30401-5 with resources cpus=1;
>> mem=512;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.021800 30401 master.hpp:323] Removing offer
>> > > >> 201306090336-982172483-39386-30401-4 with resources cpus=4;
>> mem=2048;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-1
>> > > >> I0609 03:36:34.021257 31039 slave.cpp:1110] Asked to shut down
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 by
>> [email protected]:39386
>> > > >> I0609 03:36:34.022675 31039 slave.cpp:1135] Shutting down framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.023136 31039 slave.cpp:2312] Shutting down executor
>> > > >> 'default' of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.023627 31038 exec.cpp:323] Executor asked to
>> shutdown
>> > > >> I0609 03:36:34.023632 31039 slave.cpp:448] Slave terminating
>> > > >> I0609 03:36:34.025377 31039 slave.cpp:1110] Asked to shut down
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 by @0.0.0.0:0
>> > > >> W0609 03:36:34.026823 31039 slave.cpp:1131] Ignoring shutdown
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 because it is terminating
>> > > >> I0609 03:36:34.020920 31041 hierarchical_allocator_process.hpp:616]
>> > > >> Recovered cpus=2; mem=512 (total allocatable: cpus=2; mem=512;
>> > ports=[];
>> > > >> disk=0) on slave 201306090336-982172483-39386-30401-0 from
>> framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> [       OK ] AllocatorTest/0.SlaveAdded (171 ms)
>> > > >> [ RUN      ] AllocatorTest/0.TaskFinished
>> > > >> I0609 03:36:34.029067 31040 master.cpp:228] Master started on
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.029115 31040 master.cpp:243] Master ID:
>> > > >> 201306090336-982172483-39386-30401
>> > > >> I0609 03:36:34.029798 31040 master.cpp:526] Elected as master!
>> > > >> I0609 03:36:34.029969 31035 hierarchical_allocator_process.hpp:295]
>> > > >> Initializing hierarchical allocator process with master :
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.029932 31037 slave.cpp:216] Slave started on 95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.030988 31037 slave.cpp:217] Slave resources: cpus=3;
>> > > >> mem=1024; ports=[31000-32000]; disk=23203
>> > > >> I0609 03:36:34.030282 31042 sched.cpp:177] New master at
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.032003 31037 slave.cpp:537] New master detected at
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.032490 31037 slave.cpp:552] Postponing registration
>> > until
>> > > >> recovery is complete
>> > > >> I0609 03:36:34.032076 31042 master.cpp:569] Registering framework
>> > > >> 201306090336-982172483-39386-30401-0000 at scheduler(86)@
>> > > >> 67.195.138.58:39386
>> > > >> W0609 03:36:34.029901 31041 master.cpp:83] No whitelist given.
>> > > >> Advertising offers for all slaves
>> > > >> I0609 03:36:34.032505 31038 status_update_manager.cpp:155] New
>> master
>> > > >> detected at [email protected]:39386
>> > > >> I0609 03:36:34.032948 31037 slave.cpp:398] Finished recovery
>> > > >> I0609 03:36:34.033500 31036 sched.cpp:222] Framework registered
>> with
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.033572 31042 hierarchical_allocator_process.hpp:327]
>> > > Added
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.036834 31040 master.cpp:891] Attempting to register
>> > slave
>> > > >> on quirinus.apache.org at slave(95)@67.195.138.58:39386
>> > > >> I0609 03:36:34.038187 31040 master.cpp:1851] Adding slave
>> > > >> 201306090336-982172483-39386-30401-0 at quirinus.apache.org with
>> > > cpus=3;
>> > > >> mem=1024; ports=[31000-32000]; disk=23203
>> > > >> I0609 03:36:34.038707 31035 slave.cpp:597] Registered with master
>> > > >> [email protected]:39386; given slave ID
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.037708 31042
>> hierarchical_allocator_process.hpp:705] No
>> > > >> resources available to allocate!
>> > > >> I0609 03:36:34.039695 31042 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 0 slaves in 1.986422ms
>> > > >> I0609 03:36:34.040237 31042 hierarchical_allocator_process.hpp:449]
>> > > Added
>> > > >> slave 201306090336-982172483-39386-30401-0 (quirinus.apache.org)
>> with
>> > > >> cpus=3; mem=1024; ports=[31000-32000]; disk=23203 (and cpus=3;
>> > mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 available)
>> > > >> I0609 03:36:34.040956 31042 hierarchical_allocator_process.hpp:727]
>> > > >> Offering cpus=3; mem=1024; ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0 to framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.041539 31042 hierarchical_allocator_process.hpp:687]
>> > > >> Performed allocation for slave
>> 201306090336-982172483-39386-30401-0 in
>> > > >> 591.65us
>> > > >> I0609 03:36:34.041575 31035 master.hpp:313] Adding offer
>> > > >> 201306090336-982172483-39386-30401-0 with resources cpus=3;
>> mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.042429 31035 master.cpp:1239] Sending 1 offers to
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.042917 31035 sched.cpp:287] Received 1 offers
>> > > >> I0609 03:36:34.043576 31039 master.cpp:1472] Processing reply for
>> > offer
>> > > >> 201306090336-982172483-39386-30401-0 on slave
>> > > >> 201306090336-982172483-39386-30401-0 (quirinus.apache.org) for
>> > > framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.044009 31039 master.hpp:291] Adding task 0 with
>> > resources
>> > > >> cpus=1; mem=256 on slave 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.044404 31039 master.cpp:1591] Launching task 0 of
>> > > >> framework 201306090336-982172483-39386-30401-0000 with resources
>> > cpus=1;
>> > > >> mem=256 on slave 201306090336-982172483-39386-30401-0 (
>> > > >> quirinus.apache.org)
>> > > >> I0609 03:36:34.044960 31035 slave.cpp:737] Got assigned task 0 for
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.044973 31039 master.hpp:291] Adding task 1 with
>> > resources
>> > > >> cpus=1; mem=256 on slave 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.046810 31039 master.cpp:1591] Launching task 1 of
>> > > >> framework 201306090336-982172483-39386-30401-0000 with resources
>> > cpus=1;
>> > > >> mem=256 on slave 201306090336-982172483-39386-30401-0 (
>> > > >> quirinus.apache.org)
>> > > >> I0609 03:36:34.047370 31039 master.hpp:323] Removing offer
>> > > >> 201306090336-982172483-39386-30401-0 with resources cpus=3;
>> mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.046365 31035 slave.cpp:835] Launching task 0 for
>> > > framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.047448 31040 hierarchical_allocator_process.hpp:526]
>> > > >> Framework 201306090336-982172483-39386-30401-0000 left cpus=1;
>> > mem=512;
>> > > >> ports=[31000-32000]; disk=23203 unused on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.048758 31040 hierarchical_allocator_process.hpp:569]
>> > > >> Framework 201306090336-982172483-39386-30401-0000 filtered slave
>> > > >> 201306090336-982172483-39386-30401-0 for 5secs
>> > > >> I0609 03:36:34.049358 31035 paths.hpp:303] Created executor
>> directory
>> > > >>
>> > >
>> >
>> '/tmp/AllocatorTest_0_TaskFinished_AQmj2p/slaves/201306090336-982172483-39386-30401-0/frameworks/201306090336-982172483-39386-30401-0000/executors/default/runs/586df89a-6e9e-4e7e-a4b3-fb2d2d918ac9'
>> > > >> I0609 03:36:34.049804 31035 slave.cpp:946] Queuing task '0' for
>> > executor
>> > > >> default of framework '201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.049964 31036 exec.cpp:170] Executor started at:
>> > > >> executor(34)@67.195.138.58:39386 with pid 30401
>> > > >> I0609 03:36:34.050179 31035 slave.cpp:737] Got assigned task 1 for
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.051460 31035 slave.cpp:519] Successfully attached
>> file
>> > > >>
>> > >
>> >
>> '/tmp/AllocatorTest_0_TaskFinished_AQmj2p/slaves/201306090336-982172483-39386-30401-0/frameworks/201306090336-982172483-39386-30401-0000/executors/default/runs/586df89a-6e9e-4e7e-a4b3-fb2d2d918ac9'
>> > > >> I0609 03:36:34.051898 31035 slave.cpp:1393] Got registration for
>> > > executor
>> > > >> 'default' of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.052384 31035 slave.cpp:1508] Flushing queued task 0
>> for
>> > > >> executor 'default' of framework
>> > 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.052829 31035 slave.cpp:835] Launching task 1 for
>> > > framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.053359 31035 slave.cpp:971] Sending task '1' to
>> > executor
>> > > >> 'default' of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.052407 31041 exec.cpp:194] Executor registered on
>> slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.054303 31041 exec.cpp:258] Executor asked to run
>> task
>> > '0'
>> > > >> I0609 03:36:34.055557 31041 exec.cpp:258] Executor asked to run
>> task
>> > '1'
>> > > >> I0609 03:36:34.056928 31041 exec.cpp:404] Executor sending status
>> > update
>> > > >> TASK_RUNNING (UUID: 38d9515a-7b89-4755-8585-96069a67c492) for task
>> 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.057016 31037 slave.cpp:1740] Handling status update
>> > > >> TASK_RUNNING (UUID: 38d9515a-7b89-4755-8585-96069a67c492) for task
>> 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.057587 31036 status_update_manager.cpp:290] Received
>> > > >> status update TASK_RUNNING (UUID:
>> > 38d9515a-7b89-4755-8585-96069a67c492)
>> > > for
>> > > >> task 0 of framework 201306090336-982172483-39386-30401-0000 with
>> > > >> checkpoint=false
>> > > >> I0609 03:36:34.057970 31036 status_update_manager.cpp:450] Creating
>> > > >> StatusUpdate stream for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.058506 31036 status_update_manager.cpp:336]
>> Forwarding
>> > > >> status update TASK_RUNNING (UUID:
>> > 38d9515a-7b89-4755-8585-96069a67c492)
>> > > for
>> > > >> task 0 of framework 201306090336-982172483-39386-30401-0000 to
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.057991 31041 exec.cpp:404] Executor sending status
>> > update
>> > > >> TASK_FINISHED (UUID: d79b6c76-5371-410e-a5af-c96af05bac40) for
>> task 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.059213 31035 master.cpp:1022] Status update from
>> > > slave(95)@
>> > > >> 67.195.138.58:39386: task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000 is now in state
>> TASK_RUNNING
>> > > >> I0609 03:36:34.059249 31039 slave.cpp:1796] Status update manager
>> > > >> successfully handled status update TASK_RUNNING (UUID:
>> > > >> 38d9515a-7b89-4755-8585-96069a67c492) for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.060806 31035 sched.cpp:332] Received status update
>> > > >> TASK_RUNNING (UUID: 38d9515a-7b89-4755-8585-96069a67c492) for task
>> 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 from slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.061774 31035 sched.cpp:365] Sending ACK for status
>> > update
>> > > >> TASK_RUNNING (UUID: 38d9515a-7b89-4755-8585-96069a67c492) for task
>> 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 to slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.061339 31041 exec.cpp:404] Executor sending status
>> > update
>> > > >> TASK_RUNNING (UUID: 3e0d0d8b-4d02-40c3-aad8-4b874002218a) for task
>> 1
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.061235 31039 slave.cpp:1802] Sending acknowledgement
>> > for
>> > > >> status update TASK_RUNNING (UUID:
>> > 38d9515a-7b89-4755-8585-96069a67c492)
>> > > for
>> > > >> task 0 of framework 201306090336-982172483-39386-30401-0000 to
>> > > executor(34)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.063271 31039 slave.cpp:1740] Handling status update
>> > > >> TASK_FINISHED (UUID: d79b6c76-5371-410e-a5af-c96af05bac40) for
>> task 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.063272 31042 exec.cpp:290] Executor received status
>> > > update
>> > > >> acknowledgement 38d9515a-7b89-4755-8585-96069a67c492 for task 0 of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.063863 31035 status_update_manager.cpp:290] Received
>> > > >> status update TASK_FINISHED (UUID:
>> > d79b6c76-5371-410e-a5af-c96af05bac40)
>> > > >> for task 0 of framework 201306090336-982172483-39386-30401-0000
>> with
>> > > >> checkpoint=false
>> > > >> I0609 03:36:34.064810 31035 status_update_manager.cpp:360] Received
>> > > >> status update acknowledgement 38d9515a-7b89-4755-8585-96069a67c492
>> for
>> > > task
>> > > >> 0 of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.065265 31035 status_update_manager.cpp:336]
>> Forwarding
>> > > >> status update TASK_FINISHED (UUID:
>> > d79b6c76-5371-410e-a5af-c96af05bac40)
>> > > >> for task 0 of framework 201306090336-982172483-39386-30401-0000 to
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.063911 31039 slave.cpp:1740] Handling status update
>> > > >> TASK_RUNNING (UUID: 3e0d0d8b-4d02-40c3-aad8-4b874002218a) for task
>> 1
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.066308 31039 slave.cpp:1796] Status update manager
>> > > >> successfully handled status update TASK_FINISHED (UUID:
>> > > >> d79b6c76-5371-410e-a5af-c96af05bac40) for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.066730 31039 slave.cpp:1802] Sending acknowledgement
>> > for
>> > > >> status update TASK_FINISHED (UUID:
>> > d79b6c76-5371-410e-a5af-c96af05bac40)
>> > > >> for task 0 of framework 201306090336-982172483-39386-30401-0000 to
>> > > >> executor(34)@67.195.138.58:39386
>> > > >> I0609 03:36:34.066335 31042 status_update_manager.cpp:290] Received
>> > > >> status update TASK_RUNNING (UUID:
>> > 3e0d0d8b-4d02-40c3-aad8-4b874002218a)
>> > > for
>> > > >> task 1 of framework 201306090336-982172483-39386-30401-0000 with
>> > > >> checkpoint=false
>> > > >> I0609 03:36:34.065812 31038 master.cpp:1022] Status update from
>> > > slave(95)@
>> > > >> 67.195.138.58:39386: task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000 is now in state
>> TASK_FINISHED
>> > > >> I0609 03:36:34.068521 31038 master.hpp:303] Removing task 0 with
>> > > >> resources cpus=1; mem=256 on slave
>> > 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.067533 31041 exec.cpp:290] Executor received status
>> > > update
>> > > >> acknowledgement d79b6c76-5371-410e-a5af-c96af05bac40 for task 0 of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.067988 31042 status_update_manager.cpp:450] Creating
>> > > >> StatusUpdate stream for task 1 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.069990 31042 status_update_manager.cpp:336]
>> Forwarding
>> > > >> status update TASK_RUNNING (UUID:
>> > 3e0d0d8b-4d02-40c3-aad8-4b874002218a)
>> > > for
>> > > >> task 1 of framework 201306090336-982172483-39386-30401-0000 to
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.070499 31038 master.cpp:1022] Status update from
>> > > slave(95)@
>> > > >> 67.195.138.58:39386: task 1 of framework
>> > > >> 201306090336-982172483-39386-30401-0000 is now in state
>> TASK_RUNNING
>> > > >> I0609 03:36:34.069077 31035 hierarchical_allocator_process.hpp:616]
>> > > >> Recovered cpus=1; mem=256 (total allocatable: cpus=2; mem=768;
>> > > >> ports=[31000-32000]; disk=23203) on slave
>> > > >> 201306090336-982172483-39386-30401-0 from framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.068528 31037 sched.cpp:332] Received status update
>> > > >> TASK_FINISHED (UUID: d79b6c76-5371-410e-a5af-c96af05bac40) for
>> task 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 from slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.067494 31039 slave.cpp:1341] Status update manager
>> > > >> successfully handled status update acknowledgement
>> > > >> 38d9515a-7b89-4755-8585-96069a67c492 for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.073231 31039 slave.cpp:1796] Status update manager
>> > > >> successfully handled status update TASK_RUNNING (UUID:
>> > > >> 3e0d0d8b-4d02-40c3-aad8-4b874002218a) for task 1 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.073705 31039 slave.cpp:1802] Sending acknowledgement
>> > for
>> > > >> status update TASK_RUNNING (UUID:
>> > 3e0d0d8b-4d02-40c3-aad8-4b874002218a)
>> > > for
>> > > >> task 1 of framework 201306090336-982172483-39386-30401-0000 to
>> > > executor(34)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.072808 31037 sched.cpp:332] Received status update
>> > > >> TASK_RUNNING (UUID: 3e0d0d8b-4d02-40c3-aad8-4b874002218a) for task
>> 1
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 from slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.074741 31037 sched.cpp:365] Sending ACK for status
>> > update
>> > > >> TASK_FINISHED (UUID: d79b6c76-5371-410e-a5af-c96af05bac40) for
>> task 0
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 to slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.075208 31037 sched.cpp:365] Sending ACK for status
>> > update
>> > > >> TASK_RUNNING (UUID: 3e0d0d8b-4d02-40c3-aad8-4b874002218a) for task
>> 1
>> > of
>> > > >> framework 201306090336-982172483-39386-30401-0000 to slave(95)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.074265 31039 exec.cpp:290] Executor received status
>> > > update
>> > > >> acknowledgement 3e0d0d8b-4d02-40c3-aad8-4b874002218a for task 1 of
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.075285 31036 status_update_manager.cpp:360] Received
>> > > >> status update acknowledgement d79b6c76-5371-410e-a5af-c96af05bac40
>> for
>> > > task
>> > > >> 0 of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.076943 31036 status_update_manager.cpp:481]
>> Cleaning up
>> > > >> status update stream for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.077440 31036 status_update_manager.cpp:360] Received
>> > > >> status update acknowledgement 3e0d0d8b-4d02-40c3-aad8-4b874002218a
>> for
>> > > task
>> > > >> 1 of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.077447 31042 slave.cpp:1341] Status update manager
>> > > >> successfully handled status update acknowledgement
>> > > >> d79b6c76-5371-410e-a5af-c96af05bac40 for task 0 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> E0609 03:36:34.078451 31042 slave.cpp:1365] Status update
>> > > acknowledgement
>> > > >> d79b6c76-5371-410e-a5af-c96af05bac40 for task 0 of unknown executor
>> > > >> I0609 03:36:34.079704 31042 slave.cpp:1341] Status update manager
>> > > >> successfully handled status update acknowledgement
>> > > >> 3e0d0d8b-4d02-40c3-aad8-4b874002218a for task 1 of framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.080783 31036 hierarchical_allocator_process.hpp:727]
>> > > >> Offering cpus=2; mem=768; ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0 to framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.081121 31036 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 354.973us
>> > > >> I0609 03:36:34.081166 31040 master.hpp:313] Adding offer
>> > > >> 201306090336-982172483-39386-30401-1 with resources cpus=2;
>> mem=768;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.082161 31040 master.cpp:1239] Sending 1 offers to
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.082645 31038 sched.cpp:287] Received 1 offers
>> > > >> I0609 03:36:34.083261 31035 sched.cpp:427] Stopping framework
>> > > >> '201306090336-982172483-39386-30401-0000'
>> > > >> I0609 03:36:34.083282 30401 master.cpp:385] Master terminating
>> > > >> I0609 03:36:34.084305 31040 slave.cpp:493] Slave asked to shut
>> down by
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.084308 30401 master.cpp:207] Shutting down master
>> > > >> I0609 03:36:34.085300 31037 hierarchical_allocator_process.hpp:412]
>> > > >> Deactivated framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.085295 30401 master.hpp:303] Removing task 1 with
>> > > >> resources cpus=1; mem=256 on slave
>> > 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.084761 31040 slave.cpp:1110] Asked to shut down
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 by
>> [email protected]:39386
>> > > >> I0609 03:36:34.086302 30401 master.hpp:323] Removing offer
>> > > >> 201306090336-982172483-39386-30401-1 with resources cpus=2;
>> mem=768;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.086333 31037 hierarchical_allocator_process.hpp:616]
>> > > >> Recovered cpus=1; mem=256 (total allocatable: cpus=1; mem=256;
>> > ports=[];
>> > > >> disk=0) on slave 201306090336-982172483-39386-30401-0 from
>> framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.087719 31037 hierarchical_allocator_process.hpp:616]
>> > > >> Recovered cpus=2; mem=768; ports=[31000-32000]; disk=23203 (total
>> > > >> allocatable: cpus=3; mem=1024; ports=[31000-32000]; disk=23203) on
>> > slave
>> > > >> 201306090336-982172483-39386-30401-0 from framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.086688 31040 slave.cpp:1135] Shutting down framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.089313 31040 slave.cpp:2312] Shutting down executor
>> > > >> 'default' of framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.089781 31040 slave.cpp:448] Slave terminating
>> > > >> I0609 03:36:34.090240 31040 slave.cpp:1110] Asked to shut down
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 by @0.0.0.0:0
>> > > >> W0609 03:36:34.090737 31040 slave.cpp:1131] Ignoring shutdown
>> > framework
>> > > >> 201306090336-982172483-39386-30401-0000 because it is terminating
>> > > >> I0609 03:36:34.089784 31035 exec.cpp:323] Executor asked to
>> shutdown
>> > > >> I0609 03:36:34.088877 31037 hierarchical_allocator_process.hpp:367]
>> > > >> Removed framework 201306090336-982172483-39386-30401-0000
>> > > >> [       OK ] AllocatorTest/0.TaskFinished (65 ms)
>> > > >> [ RUN      ] AllocatorTest/0.WhitelistSlave
>> > > >> I0609 03:36:34.093596 31038 master.cpp:228] Master started on
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.093652 31038 master.cpp:243] Master ID:
>> > > >> 201306090336-982172483-39386-30401
>> > > >> I0609 03:36:34.094219 31041 slave.cpp:216] Slave started on 96)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.094543 31042 hierarchical_allocator_process.hpp:295]
>> > > >> Initializing hierarchical allocator process with master :
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.094588 31038 master.cpp:526] Elected as master!
>> > > >> I0609 03:36:34.094712 31041 slave.cpp:217] Slave resources: cpus=2;
>> > > >> mem=1024; ports=[31000-32000]; disk=23203
>> > > >> I0609 03:36:34.094830 31037 sched.cpp:177] New master at
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.095294 31042 hierarchical_allocator_process.hpp:491]
>> > > >> Updated slave white list: { dummy-slave }
>> > > >> I0609 03:36:34.097889 31042
>> hierarchical_allocator_process.hpp:700] No
>> > > >> users to allocate resources!
>> > > >> I0609 03:36:34.098345 31042 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 0 slaves in 457.169us
>> > > >> I0609 03:36:34.097466 31041 slave.cpp:537] New master detected at
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.099395 31041 slave.cpp:552] Postponing registration
>> > until
>> > > >> recovery is complete
>> > > >> I0609 03:36:34.099865 31041 slave.cpp:398] Finished recovery
>> > > >> I0609 03:36:34.099416 31038 status_update_manager.cpp:155] New
>> master
>> > > >> detected at [email protected]:39386
>> > > >> I0609 03:36:34.097636 31040 master.cpp:569] Registering framework
>> > > >> 201306090336-982172483-39386-30401-0000 at scheduler(87)@
>> > > >> 67.195.138.58:39386
>> > > >> I0609 03:36:34.101644 31037 sched.cpp:222] Framework registered
>> with
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.101655 31040 master.cpp:556] Framework
>> > > >> 201306090336-982172483-39386-30401-0000 (scheduler(87)@
>> > > >> 67.195.138.58:39386) already registered, resending acknowledgement
>> > > >> I0609 03:36:34.101721 31042 hierarchical_allocator_process.hpp:327]
>> > > Added
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.103056 31042
>> hierarchical_allocator_process.hpp:705] No
>> > > >> resources available to allocate!
>> > > >> I0609 03:36:34.103551 31042 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 0 slaves in 495.298us
>> > > >> I0609 03:36:34.102608 31040 master.cpp:891] Attempting to register
>> > slave
>> > > >> on quirinus.apache.org at slave(96)@67.195.138.58:39386
>> > > >> I0609 03:36:34.105286 31040 master.cpp:1851] Adding slave
>> > > >> 201306090336-982172483-39386-30401-0 at quirinus.apache.org with
>> > > cpus=2;
>> > > >> mem=1024; ports=[31000-32000]; disk=23203
>> > > >> I0609 03:36:34.105808 31039 slave.cpp:597] Registered with master
>> > > >> [email protected]:39386; given slave ID
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.102613 31035 sched.cpp:217] Ignoring framework
>> > registered
>> > > >> message because the driver is already connected!
>> > > >> I0609 03:36:34.105860 31040 master.cpp:880] Slave
>> > > >> 201306090336-982172483-39386-30401-0 (quirinus.apache.org) already
>> > > >> registered, resending acknowledgement
>> > > >> I0609 03:36:34.105983 31036 hierarchical_allocator_process.hpp:449]
>> > > Added
>> > > >> slave 201306090336-982172483-39386-30401-0 (quirinus.apache.org)
>> with
>> > > >> cpus=2; mem=1024; ports=[31000-32000]; disk=23203 (and cpus=2;
>> > mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 available)
>> > > >> I0609 03:36:34.114892 31039 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 16.043us
>> > > >> I0609 03:36:34.124933 31040 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 10.359us
>> > > >> I0609 03:36:34.135016 31037 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 21.209us
>> > > >> I0609 03:36:34.145316 31041 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 17.78us
>> > > >> I0609 03:36:34.145419 31042 hierarchical_allocator_process.hpp:491]
>> > > >> Updated slave white list: { dummy-slave, quirinus.apache.org }
>> > > >> I0609 03:36:34.155143 31040 hierarchical_allocator_process.hpp:727]
>> > > >> Offering cpus=2; mem=1024; ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0 to framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.155275 31040 hierarchical_allocator_process.hpp:667]
>> > > >> Performed allocation for 1 slaves in 169.208us
>> > > >> I0609 03:36:34.155313 31041 master.hpp:313] Adding offer
>> > > >> 201306090336-982172483-39386-30401-0 with resources cpus=2;
>> mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.155946 31041 master.cpp:1239] Sending 1 offers to
>> > > >> framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.156447 31036 sched.cpp:287] Received 1 offers
>> > > >> I0609 03:36:34.165210 31041 sched.cpp:427] Stopping framework
>> > > >> '201306090336-982172483-39386-30401-0000'
>> > > >> I0609 03:36:34.165271 30401 master.cpp:385] Master terminating
>> > > >> I0609 03:36:34.165591 30401 master.cpp:207] Shutting down master
>> > > >> I0609 03:36:34.166053 31036 hierarchical_allocator_process.hpp:412]
>> > > >> Deactivated framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.166080 30401 master.hpp:323] Removing offer
>> > > >> 201306090336-982172483-39386-30401-0 with resources cpus=2;
>> mem=1024;
>> > > >> ports=[31000-32000]; disk=23203 on slave
>> > > >> 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.165608 31041 slave.cpp:493] Slave asked to shut
>> down by
>> > > >> [email protected]:39386
>> > > >> I0609 03:36:34.167063 31036 hierarchical_allocator_process.hpp:616]
>> > > >> Recovered cpus=2; mem=1024; ports=[31000-32000]; disk=23203 (total
>> > > >> allocatable: cpus=2; mem=1024; ports=[31000-32000]; disk=23203) on
>> > slave
>> > > >> 201306090336-982172483-39386-30401-0 from framework
>> > > >> 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.169354 31036 hierarchical_allocator_process.hpp:367]
>> > > >> Removed framework 201306090336-982172483-39386-30401-0000
>> > > >> I0609 03:36:34.169810 31036 hierarchical_allocator_process.hpp:477]
>> > > >> Removed slave 201306090336-982172483-39386-30401-0
>> > > >> I0609 03:36:34.168808 31041 slave.cpp:448] Slave terminating
>> > > >> [       OK ] AllocatorTest/0.WhitelistSlave (78 ms)
>> > > >> [----------] 9 tests from AllocatorTest/0 (570 ms total)
>> > > >>
>> > > >> [----------] 1 test from LoggingTest
>> > > >> [ RUN      ] LoggingTest.Toggle
>> > > >> I0609 03:36:34.171416 31035 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.172097 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> I0609 03:36:34.172571 31042 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.173122 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> I0609 03:36:34.173508 31039 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.174036 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> I0609 03:36:34.174438 31039 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.174983 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> I0609 03:36:34.175534 31042 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.176033 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> I0609 03:36:34.176769 31036 process.cpp:2942] Handling HTTP event
>> for
>> > > >> process 'logging' with path: '/logging/toggle'
>> > > >> I0609 03:36:34.214529 31043 process.cpp:878] Socket closed while
>> > > receiving
>> > > >> [       OK ] LoggingTest.Toggle (43 ms)
>> > > >> [----------] 1 test from LoggingTest (43 ms total)
>> > > >>
>> > > >> [----------] 5 tests from CgroupsCpusetTest
>> > > >> [ RUN      ] CgroupsCpusetTest.OneCPUOneCpuset
>> > > >> [       OK ] CgroupsCpusetTest.OneCPUOneCpuset (0 ms)
>> > > >> [ RUN      ] CgroupsCpusetTest.OneCPUManyCpusets
>> > > >> [       OK ] CgroupsCpusetTest.OneCPUManyCpusets (0 ms)
>> > > >> [ RUN      ] CgroupsCpusetTest.ManyCPUOneCpuset
>> > > >> [       OK ] CgroupsCpusetTest.ManyCPUOneCpuset (1 ms)
>> > > >> [ RUN      ] CgroupsCpusetTest.ManyCPUManyCpusets
>> > > >> [       OK ] CgroupsCpusetTest.ManyCPUManyCpusets (0 ms)
>> > > >> [ RUN      ] CgroupsCpusetTest.IntegerAllocations
>> > > >> [       OK ] CgroupsCpusetTest.IntegerAllocations (0 ms)
>> > > >> [----------] 5 tests from CgroupsCpusetTest (1 ms total)
>> > > >>
>> > > >> [----------] 3 tests from FsTest
>> > > >> [ RUN      ] FsTest.MountTableRead
>> > > >> [       OK ] FsTest.MountTableRead (32 ms)
>> > > >> [ RUN      ] FsTest.MountTableHasOption
>> > > >> [       OK ] FsTest.MountTableHasOption (0 ms)
>> > > >> [ RUN      ] FsTest.FileSystemTableRead
>> > > >> [       OK ] FsTest.FileSystemTableRead (27 ms)
>> > > >> [----------] 3 tests from FsTest (59 ms total)
>> > > >>
>> > > >> [----------] Global test environment tear-down
>> > > >> [==========] 165 tests from 33 test cases ran. (56358 ms total)
>> > > >> [  PASSED  ] 164 tests.
>> > > >> [  FAILED  ] 1 test, listed below:
>> > > >> [  FAILED  ] CoordinatorTest.TruncateNotLearnedFill
>> > > >>
>> > > >>  1 FAILED TEST
>> > > >> FAIL: mesos-tests
>> > > >> ==================
>> > > >> 1 of 1 test failed
>> > > >> ==================
>> > > >> make[3]: *** [check-TESTS] Error 1
>> > > >> make[3]: Leaving directory `<
>> > > >>
>> > >
>> >
>> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/ws/build/src
>> > > >> '>
>> > > >> make[2]: *** [check-am] Error 2
>> > > >> make[2]: Leaving directory `<
>> > > >>
>> > >
>> >
>> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/ws/build/src
>> > > >> '>
>> > > >> make[1]: *** [check] Error 2
>> > > >> make[1]: Leaving directory `<
>> > > >>
>> > >
>> >
>> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-Out-Of-Src-Disable-Java-Disable-Python-Disable-Webui/ws/build/src
>> > > >> '>
>> > > >> make: *** [check-recursive] Error 1
>> > > >> I0609 03:36:35.402555  2251 exec.cpp:83] Committing suicide by
>> killing
>> > > >> the process group
>> > > >> Build step 'Execute shell' marked build as failure
>> > > >>
>> > > >
>> > > >
>> > >
>> >
>>
>
>
