Known issue: the ReaperTest.TerminatedChildProcess failure is tracked at https://issues.apache.org/jira/browse/MESOS-534


On Fri, Jun 28, 2013 at 8:56 AM, Apache Jenkins Server <
[email protected]> wrote:

> See <
> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/1087/
> >
>
> ------------------------------------------
> [...truncated 15816 lines...]
> I0628 15:56:47.176548  7826 exec.cpp:290] Executor received status update
> acknowledgement 6d2273e6-0325-4fd2-b313-279078ac52de for task 0 of
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.176676  7827 status_update_manager.cpp:360] Received status
> update acknowledgement 6d2273e6-0325-4fd2-b313-279078ac52de for task 0 of
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.176895  7824 slave.cpp:1342] Status update manager
> successfully handled status update acknowledgement
> 6d2273e6-0325-4fd2-b313-279078ac52de for task 0 of framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177011  7801 master.cpp:385] Master terminating
> I0628 15:56:47.177124  7801 master.cpp:207] Shutting down master
> I0628 15:56:47.177201  7826 slave.cpp:1883] master@67.195.138.8:36742 exited
> W0628 15:56:47.177249  7826 slave.cpp:1886] Master disconnected! Waiting
> for a new master to be elected
> I0628 15:56:47.177297  7801 master.hpp:303] Removing task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:47.177266  7823 hierarchical_allocator_process.hpp:412]
> Deactivated framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177340  7826 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by master@67.195.138.8:36742
> I0628 15:56:47.177561  7826 slave.cpp:1136] Shutting down framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177623  7823 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=500 (total allocatable: cpus=2; mem=1024;
> ports=[31000-32000]; disk=19149) on slave
> 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177646  7826 slave.cpp:2327] Shutting down executor
> 'default' of framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177801  7823 hierarchical_allocator_process.hpp:367]
> Removed framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:47.177857  7828 exec.cpp:323] Executor asked to shutdown
> I0628 15:56:47.178652  7828 master.cpp:228] Master started on
> 67.195.138.8:36742
> I0628 15:56:47.178724  7828 master.cpp:243] Master ID:
> 201306281556-143311683-36742-7801
> I0628 15:56:47.180178  7826 detector.cpp:420] Master detector (slave(99)@
> 67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:47.180253  7829 detector.cpp:420] Master detector
> (scheduler(90)@67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:47.180850  7825 detector.cpp:234] Master detector (
> master@67.195.138.8:36742) connected to ZooKeeper ...
> I0628 15:56:47.185730  7826 detector.cpp:441] Master detector (slave(99)@
> 67.195.138.8:36742) couldn't find any masters
> W0628 15:56:47.185811  7827 master.cpp:83] No whitelist given. Advertising
> offers for all slaves
> I0628 15:56:47.185834  7829 detector.cpp:441] Master detector
> (scheduler(90)@67.195.138.8:36742) couldn't find any masters
> I0628 15:56:47.185925  7825 detector.cpp:251] Trying to create path
> '/znode' in ZooKeeper
> I0628 15:56:47.186028  7823 hierarchical_allocator_process.hpp:295]
> Initializing hierarchical allocator process with master :
> master@67.195.138.8:36742
> I0628 15:56:47.186071  7826 slave.cpp:562] Lost master(s) ... waiting
> I0628 15:56:47.186286  7822 sched.cpp:194] No master detected, waiting for
> another master
> I0628 15:56:47.187988  7825 detector.cpp:281] Created ephemeral/sequence
> znode at '/znode/0000000002'
> I0628 15:56:47.188357  7829 detector.cpp:420] Master detector
> (scheduler(90)@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:47.188539  7822 detector.cpp:420] Master detector (slave(99)@
> 67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:47.188806  7825 detector.cpp:420] Master detector (
> master@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:47.194286  7829 detector.cpp:467] Master detector
> (scheduler(90)@67.195.138.8:36742)  got new master pid:
> master@67.195.138.8:36742
> I0628 15:56:47.194411  7829 sched.cpp:177] New master at
> master@67.195.138.8:36742
> I0628 15:56:47.194476  7822 detector.cpp:467] Master detector (slave(99)@
> 67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> W0628 15:56:47.194511  7827 master.cpp:591] Ignoring re-register framework
> message since not elected yet
> I0628 15:56:47.194653  7825 detector.cpp:467] Master detector (
> master@67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> I0628 15:56:47.194706  7823 slave.cpp:528] New master detected at
> master@67.195.138.8:36742
> I0628 15:56:47.194823  7825 master.cpp:526] Elected as master!
> I0628 15:56:47.194875  7826 status_update_manager.cpp:155] New master
> detected at master@67.195.138.8:36742
> I0628 15:56:48.170889  7828 master.cpp:604] Re-registering framework
> 201306281556-143311683-36742-7801-0000 at scheduler(90)@67.195.138.8:36742
> I0628 15:56:48.171073  7824 sched.cpp:246] Framework re-registered with
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.171109  7828 hierarchical_allocator_process.hpp:327] Added
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.171228  7828 hierarchical_allocator_process.hpp:705] No
> resources available to allocate!
> I0628 15:56:48.171299  7828 hierarchical_allocator_process.hpp:667]
> Performed allocation for 0 slaves in 70.86us
> I0628 15:56:48.186911  7822 hierarchical_allocator_process.hpp:705] No
> resources available to allocate!
> I0628 15:56:48.186985  7822 hierarchical_allocator_process.hpp:667]
> Performed allocation for 0 slaves in 75.263us
> I0628 15:56:48.195159  7827 master.cpp:963] Attempting to re-register
> slave 201306281556-143311683-36742-7801-0 at slave(99)@67.195.138.8:36742(
> minerva.apache.org)
> I0628 15:56:48.195207  7827 master.cpp:1851] Adding slave
> 201306281556-143311683-36742-7801-0 at minerva.apache.org with cpus=2;
> mem=1024; ports=[31000-32000]; disk=19149
> I0628 15:56:48.195322  7826 slave.cpp:629] Re-registered with master
> master@67.195.138.8:36742
> I0628 15:56:48.195369  7827 master.hpp:291] Adding task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> W0628 15:56:48.195562  7829 slave.cpp:1272] Ignoring updating pid for
> framework 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:48.195663  7822 hierarchical_allocator_process.hpp:449] Added
> slave 201306281556-143311683-36742-7801-0 (minerva.apache.org) with
> cpus=2; mem=1024; ports=[31000-32000]; disk=19149 (and cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 available)
> I0628 15:56:48.195817  7822 hierarchical_allocator_process.hpp:727]
> Offering cpus=1; mem=524; ports=[31000-32000]; disk=19149 on slave
> 201306281556-143311683-36742-7801-0 to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.195965  7822 hierarchical_allocator_process.hpp:687]
> Performed allocation for slave 201306281556-143311683-36742-7801-0 in
> 181.219us
> I0628 15:56:48.196027  7826 master.hpp:313] Adding offer
> 201306281556-143311683-36742-7801-0 with resources cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:48.196120  7826 master.cpp:1239] Sending 1 offers to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.196437  7826 master.cpp:385] Master terminating
> I0628 15:56:48.196444  7827 sched.cpp:427] Stopping framework
> '201306281556-143311683-36742-7801-0000'
> I0628 15:56:48.196528  7823 slave.cpp:484] Slave asked to shut down by
> master@67.195.138.8:36742
> I0628 15:56:48.196537  7801 master.cpp:207] Shutting down master
> I0628 15:56:48.196786  7801 master.hpp:303] Removing task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:48.196794  7822 hierarchical_allocator_process.hpp:412]
> Deactivated framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.196964  7801 master.hpp:323] Removing offer
> 201306281556-143311683-36742-7801-0 with resources cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:48.197072  7829 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=500 (total allocatable: cpus=1; mem=500; ports=[];
> disk=0) on slave 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.196655  7823 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by master@67.195.138.8:36742
> W0628 15:56:48.197228  7823 slave.cpp:1132] Ignoring shutdown framework
> 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:48.205646  7823 slave.cpp:439] Slave terminating
> I0628 15:56:48.198468  7826 detector.cpp:420] Master detector (slave(99)@
> 67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:48.205828  7826 detector.cpp:441] Master detector (slave(99)@
> 67.195.138.8:36742) couldn't find any masters
> I0628 15:56:48.205767  7823 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by @0.0.0.0:0
> W0628 15:56:48.206044  7823 slave.cpp:1132] Ignoring shutdown framework
> 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:48.197237  7829 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=524; ports=[31000-32000]; disk=19149 (total
> allocatable: cpus=2; mem=1024; ports=[31000-32000]; disk=19149) on slave
> 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.206367  7829 hierarchical_allocator_process.hpp:367]
> Removed framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.206434  7829 hierarchical_allocator_process.hpp:477]
> Removed slave 201306281556-143311683-36742-7801-0
> [       OK ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst (2054 ms)
> [ RUN      ] AllocatorZooKeeperTest/0.SlaveReregistersFirst
> I0628 15:56:48.207195  7823 master.cpp:228] Master started on
> 67.195.138.8:36742
> I0628 15:56:48.207254  7823 master.cpp:243] Master ID:
> 201306281556-143311683-36742-7801
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@658: Client
> environment:zookeeper.version=zookeeper C client 3.3.4
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@662: Client
> environment:host.name=minerva
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@669: Client
> environment:os.name=Linux
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@670: Client
> environment:os.arch=3.2.0-38-generic
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@671: Client
> environment:os.version=#61-Ubuntu SMP Tue Feb 19 12:18:21 UTC 2013
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@679: Client
> environment:user.name=(null)
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@687: Client
> environment:user.home=/home/jenkins
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@log_env@699: Client
> environment:user.dir=<
> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/ws/src
> >
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_INFO@zookeeper_init@727:
> Initiating client connection, host=127.0.0.1:47283 sessionTimeout=10000
> watcher=0x2abb5f3e9b60 sessionId=0 sessionPasswd=<null>
> context=0x2abb8800c5d0 flags=0
> 2013-06-28 15:56:48,252:7801(0x2abb626dd700):ZOO_DEBUG@start_threads@152:
> starting threads...
> 2013-06-28 15:56:48,253:7801(0x2abc71176700):ZOO_DEBUG@do_io@279: started
> IO thread
> 2013-06-28 15:56:48,253:7801(0x2abc71377700):ZOO_DEBUG@do_completion@326:
> started completion thread
> I0628 15:56:48.208644  7824 detector.cpp:234] Master detector (
> master@67.195.138.8:36742) connected to ZooKeeper ...
> I0628 15:56:48.248466  7829 hierarchical_allocator_process.hpp:295]
> Initializing hierarchical allocator process with master :
> master@67.195.138.8:36742
> W0628 15:56:48.248481  7822 master.cpp:83] No whitelist given. Advertising
> offers for all slaves
> I0628 15:56:48.252524  7826 slave.cpp:112] Slave started on 100)@
> 67.195.138.8:36742
> I0628 15:56:48.254101  7828 detector.cpp:234] Master detector (slave(100)@
> 67.195.138.8:36742) connected to ZooKeeper ...
> I0628 15:56:48.254555  7823 detector.cpp:234] Master detector
> (scheduler(91)@67.195.138.8:36742) connected to ZooKeeper ...
> I0628 15:56:48.261678  7824 detector.cpp:251] Trying to create path
> '/znode' in ZooKeeper
> I0628 15:56:48.261997  7828 detector.cpp:251] Trying to create path
> '/znode' in ZooKeeper
> I0628 15:56:48.262030  7826 slave.cpp:204] Slave resources: cpus=2;
> mem=1024; ports=[31000-32000]; disk=19149
> I0628 15:56:48.262099  7823 detector.cpp:251] Trying to create path
> '/znode' in ZooKeeper
> I0628 15:56:48.262823  7822 slave.cpp:389] Finished recovery
> I0628 15:56:48.264338  7824 detector.cpp:281] Created ephemeral/sequence
> znode at '/znode/0000000004'
> I0628 15:56:48.264493  7828 detector.cpp:420] Master detector (slave(100)@
> 67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:48.264744  7823 detector.cpp:420] Master detector
> (scheduler(91)@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:48.264917  7824 detector.cpp:420] Master detector (
> master@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:48.270197  7828 detector.cpp:467] Master detector (slave(100)@
> 67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> I0628 15:56:48.270315  7828 slave.cpp:528] New master detected at
> master@67.195.138.8:36742
> I0628 15:56:48.270406  7827 status_update_manager.cpp:155] New master
> detected at master@67.195.138.8:36742
> W0628 15:56:48.270467  7825 master.cpp:872] Ignoring register slave
> message from minerva.apache.org since not elected yet
> I0628 15:56:48.270454  7824 detector.cpp:467] Master detector (
> master@67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> I0628 15:56:48.270699  7824 master.cpp:526] Elected as master!
> I0628 15:56:48.270792  7823 detector.cpp:467] Master detector
> (scheduler(91)@67.195.138.8:36742)  got new master pid:
> master@67.195.138.8:36742
> I0628 15:56:48.270874  7823 sched.cpp:177] New master at
> master@67.195.138.8:36742
> I0628 15:56:48.271005  7823 master.cpp:569] Registering framework
> 201306281556-143311683-36742-7801-0000 at scheduler(91)@67.195.138.8:36742
> I0628 15:56:48.271086  7824 hierarchical_allocator_process.hpp:327] Added
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:48.271111  7824 hierarchical_allocator_process.hpp:705] No
> resources available to allocate!
> I0628 15:56:48.271224  7824 hierarchical_allocator_process.hpp:667]
> Performed allocation for 0 slaves in 113.206us
> I0628 15:56:48.271131  7823 sched.cpp:222] Framework registered with
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.262648  7828 hierarchical_allocator_process.hpp:705] No
> resources available to allocate!
> I0628 15:56:49.262730  7828 hierarchical_allocator_process.hpp:667]
> Performed allocation for 0 slaves in 96.798us
> I0628 15:56:49.270871  7823 master.cpp:891] Attempting to register slave
> on minerva.apache.org at slave(100)@67.195.138.8:36742
> I0628 15:56:49.270900  7823 master.cpp:1851] Adding slave
> 201306281556-143311683-36742-7801-0 at minerva.apache.org with cpus=2;
> mem=1024; ports=[31000-32000]; disk=19149
> I0628 15:56:49.270987  7828 slave.cpp:588] Registered with master
> master@67.195.138.8:36742; given slave ID
> 201306281556-143311683-36742-7801-0
> I0628 15:56:49.271107  7824 hierarchical_allocator_process.hpp:449] Added
> slave 201306281556-143311683-36742-7801-0 (minerva.apache.org) with
> cpus=2; mem=1024; ports=[31000-32000]; disk=19149 (and cpus=2; mem=1024;
> ports=[31000-32000]; disk=19149 available)
> I0628 15:56:49.271157  7824 hierarchical_allocator_process.hpp:727]
> Offering cpus=2; mem=1024; ports=[31000-32000]; disk=19149 on slave
> 201306281556-143311683-36742-7801-0 to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.271308  7824 hierarchical_allocator_process.hpp:687]
> Performed allocation for slave 201306281556-143311683-36742-7801-0 in
> 156.014us
> I0628 15:56:49.271340  7823 master.hpp:313] Adding offer
> 201306281556-143311683-36742-7801-0 with resources cpus=2; mem=1024;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:49.271406  7823 master.cpp:1239] Sending 1 offers to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.271646  7823 master.cpp:1472] Processing reply for offer
> 201306281556-143311683-36742-7801-0 on slave
> 201306281556-143311683-36742-7801-0 (minerva.apache.org) for framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.271755  7823 master.hpp:291] Adding task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:49.271797  7823 master.cpp:1591] Launching task 0 of framework
> 201306281556-143311683-36742-7801-0000 with resources cpus=1; mem=500 on
> slave 201306281556-143311683-36742-7801-0 (minerva.apache.org)
> I0628 15:56:49.271888  7824 slave.cpp:738] Got assigned task 0 for
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.271987  7823 master.hpp:323] Removing offer
> 201306281556-143311683-36742-7801-0 with resources cpus=2; mem=1024;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:49.271983  7827 hierarchical_allocator_process.hpp:526]
> Framework 201306281556-143311683-36742-7801-0000 left cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 unused on slave
> 201306281556-143311683-36742-7801-0
> I0628 15:56:49.272080  7824 slave.cpp:836] Launching task 0 for framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.272208  7827 hierarchical_allocator_process.hpp:569]
> Framework 201306281556-143311683-36742-7801-0000 filtered slave
> 201306281556-143311683-36742-7801-0 for 5secs
> I0628 15:56:49.273571  7824 paths.hpp:303] Created executor directory
> '/tmp/AllocatorZooKeeperTest_0_SlaveReregistersFirst_63hr3T/slaves/201306281556-143311683-36742-7801-0/frameworks/201306281556-143311683-36742-7801-0000/executors/default/runs/941b9b4e-d6df-438b-b009-aa559a9a8ef1'
> I0628 15:56:49.273752  7824 slave.cpp:947] Queuing task '0' for executor
> default of framework '201306281556-143311683-36742-7801-0000
> I0628 15:56:49.273869  7826 slave.cpp:510] Successfully attached file
> '/tmp/AllocatorZooKeeperTest_0_SlaveReregistersFirst_63hr3T/slaves/201306281556-143311683-36742-7801-0/frameworks/201306281556-143311683-36742-7801-0000/executors/default/runs/941b9b4e-d6df-438b-b009-aa559a9a8ef1'
> I0628 15:56:49.273929  7822 exec.cpp:170] Executor started at:
> executor(38)@67.195.138.8:36742 with pid 7801
> I0628 15:56:49.274046  7825 slave.cpp:1394] Got registration for executor
> 'default' of framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.274160  7825 slave.cpp:1509] Flushing queued task 0 for
> executor 'default' of framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.274209  7824 exec.cpp:194] Executor registered on slave
> 201306281556-143311683-36742-7801-0
> I0628 15:56:49.274381  7824 exec.cpp:258] Executor asked to run task '0'
> I0628 15:56:49.275547  7824 exec.cpp:404] Executor sending status update
> TASK_RUNNING (UUID: fc35916b-9669-40ea-9fde-ecda94d94b88) for task 0 of
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.275620  7824 slave.cpp:1691] Handling status update
> TASK_RUNNING (UUID: fc35916b-9669-40ea-9fde-ecda94d94b88) for task 0 of
> framework 201306281556-143311683-36742-7801-0000 from executor(38)@
> 67.195.138.8:36742
> I0628 15:56:49.275739  7824 status_update_manager.cpp:290] Received status
> update TASK_RUNNING (UUID: fc35916b-9669-40ea-9fde-ecda94d94b88) for task 0
> of framework 201306281556-143311683-36742-7801-0000 with checkpoint=false
> I0628 15:56:49.275774  7824 status_update_manager.cpp:450] Creating
> StatusUpdate stream for task 0 of framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.275908  7824 status_update_manager.cpp:336] Forwarding
> status update TASK_RUNNING (UUID: fc35916b-9669-40ea-9fde-ecda94d94b88) for
> task 0 of framework 201306281556-143311683-36742-7801-0000 to
> master@67.195.138.8:36742
> I0628 15:56:49.276011  7822 master.cpp:1022] Status update from slave(100)@
> 67.195.138.8:36742: task 0 of framework
> 201306281556-143311683-36742-7801-0000 is now in state TASK_RUNNING
> I0628 15:56:49.276119  7823 slave.cpp:1802] Status update manager
> successfully handled status update TASK_RUNNING (UUID:
> fc35916b-9669-40ea-9fde-ecda94d94b88) for task 0 of framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.276247  7823 slave.cpp:1808] Sending acknowledgement for
> status update TASK_RUNNING (UUID: fc35916b-9669-40ea-9fde-ecda94d94b88) for
> task 0 of framework 201306281556-143311683-36742-7801-0000 to executor(38)@
> 67.195.138.8:36742
> I0628 15:56:49.276329  7827 exec.cpp:290] Executor received status update
> acknowledgement fc35916b-9669-40ea-9fde-ecda94d94b88 for task 0 of
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.276574  7823 status_update_manager.cpp:360] Received status
> update acknowledgement fc35916b-9669-40ea-9fde-ecda94d94b88 for task 0 of
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.276643  7801 master.cpp:385] Master terminating
> I0628 15:56:49.276729  7829 slave.cpp:1342] Status update manager
> successfully handled status update acknowledgement
> fc35916b-9669-40ea-9fde-ecda94d94b88 for task 0 of framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.276803  7801 master.cpp:207] Shutting down master
> I0628 15:56:49.276968  7829 slave.cpp:1883] master@67.195.138.8:36742 exited
> W0628 15:56:49.277010  7829 slave.cpp:1886] Master disconnected! Waiting
> for a new master to be elected
> I0628 15:56:49.277026  7823 hierarchical_allocator_process.hpp:412]
> Deactivated framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.277029  7801 master.hpp:303] Removing task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:49.277108  7829 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by master@67.195.138.8:36742
> I0628 15:56:49.277307  7829 slave.cpp:1136] Shutting down framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.277358  7829 slave.cpp:2327] Shutting down executor
> 'default' of framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.277366  7827 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=500 (total allocatable: cpus=2; mem=1024;
> ports=[31000-32000]; disk=19149) on slave
> 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:49.277477  7825 exec.cpp:323] Executor asked to shutdown
> I0628 15:56:49.278202  7827 master.cpp:228] Master started on
> 67.195.138.8:36742
> I0628 15:56:49.278245  7827 master.cpp:243] Master ID:
> 201306281556-143311683-36742-7801
> I0628 15:56:49.279021  7823 detector.cpp:420] Master detector
> (scheduler(91)@67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:49.279146  7826 detector.cpp:420] Master detector (slave(100)@
> 67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:49.279909  7824 detector.cpp:234] Master detector (
> master@67.195.138.8:36742) connected to ZooKeeper ...
> I0628 15:56:49.285692  7823 detector.cpp:441] Master detector
> (scheduler(91)@67.195.138.8:36742) couldn't find any masters
> W0628 15:56:49.285754  7828 master.cpp:83] No whitelist given. Advertising
> offers for all slaves
> I0628 15:56:49.285801  7826 detector.cpp:441] Master detector (slave(100)@
> 67.195.138.8:36742) couldn't find any masters
> I0628 15:56:49.285879  7825 hierarchical_allocator_process.hpp:295]
> Initializing hierarchical allocator process with master :
> master@67.195.138.8:36742
> I0628 15:56:49.285941  7824 detector.cpp:251] Trying to create path
> '/znode' in ZooKeeper
> I0628 15:56:49.286063  7823 sched.cpp:194] No master detected, waiting for
> another master
> I0628 15:56:49.286192  7828 slave.cpp:562] Lost master(s) ... waiting
> I0628 15:56:49.287852  7824 detector.cpp:281] Created ephemeral/sequence
> znode at '/znode/0000000006'
> I0628 15:56:49.288226  7825 detector.cpp:420] Master detector (slave(100)@
> 67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:49.288372  7827 detector.cpp:420] Master detector
> (scheduler(91)@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:49.288534  7824 detector.cpp:420] Master detector (
> master@67.195.138.8:36742)  found 1 registered masters
> I0628 15:56:49.294181  7825 detector.cpp:467] Master detector (slave(100)@
> 67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> I0628 15:56:49.294378  7827 detector.cpp:467] Master detector
> (scheduler(91)@67.195.138.8:36742)  got new master pid:
> master@67.195.138.8:36742
> I0628 15:56:49.294389  7829 slave.cpp:528] New master detected at
> master@67.195.138.8:36742
> I0628 15:56:49.294564  7822 sched.cpp:177] New master at
> master@67.195.138.8:36742
> I0628 15:56:49.294620  7825 status_update_manager.cpp:155] New master
> detected at master@67.195.138.8:36742
> I0628 15:56:49.294565  7824 detector.cpp:467] Master detector (
> master@67.195.138.8:36742)  got new master pid: master@67.195.138.8:36742
> W0628 15:56:49.294739  7828 master.cpp:918] Ignoring re-register slave
> message from minerva.apache.org since not elected yet
> I0628 15:56:49.294917  7828 master.cpp:526] Elected as master!
> I0628 15:56:50.271054  7827 master.cpp:963] Attempting to re-register
> slave 201306281556-143311683-36742-7801-0 at slave(100)@67.195.138.8:36742(
> minerva.apache.org)
> I0628 15:56:50.271103  7827 master.cpp:1851] Adding slave
> 201306281556-143311683-36742-7801-0 at minerva.apache.org with cpus=2;
> mem=1024; ports=[31000-32000]; disk=19149
> I0628 15:56:50.271298  7827 master.hpp:291] Adding task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:50.271333  7823 slave.cpp:629] Re-registered with master
> master@67.195.138.8:36742
> W0628 15:56:50.271366  7827 master.cpp:1943] Possibly orphaned task 0 of
> framework 201306281556-143311683-36742-7801-0000 running on slave
> 201306281556-143311683-36742-7801-0 (minerva.apache.org)
> I0628 15:56:50.271782  7822 hierarchical_allocator_process.hpp:449] Added
> slave 201306281556-143311683-36742-7801-0 (minerva.apache.org) with
> cpus=2; mem=1024; ports=[31000-32000]; disk=19149 (and cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 available)
> I0628 15:56:50.271834  7822 hierarchical_allocator_process.hpp:700] No
> users to allocate resources!
> I0628 15:56:50.271886  7822 hierarchical_allocator_process.hpp:687]
> Performed allocation for slave 201306281556-143311683-36742-7801-0 in
> 51.102us
> I0628 15:56:50.287039  7828 hierarchical_allocator_process.hpp:700] No
> users to allocate resources!
> I0628 15:56:50.287113  7828 hierarchical_allocator_process.hpp:667]
> Performed allocation for 1 slaves in 82.592us
> I0628 15:56:50.295202  7826 master.cpp:604] Re-registering framework
> 201306281556-143311683-36742-7801-0000 at scheduler(91)@67.195.138.8:36742
> I0628 15:56:50.295414  7827 sched.cpp:246] Framework re-registered with
> 201306281556-143311683-36742-7801-0000
> W0628 15:56:50.295446  7822 slave.cpp:1272] Ignoring updating pid for
> framework 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:50.295681  7823 hierarchical_allocator_process.hpp:327] Added
> framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.295753  7823 hierarchical_allocator_process.hpp:727]
> Offering cpus=1; mem=524; ports=[31000-32000]; disk=19149 on slave
> 201306281556-143311683-36742-7801-0 to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.295912  7823 hierarchical_allocator_process.hpp:667]
> Performed allocation for 1 slaves in 193.835us
> I0628 15:56:50.295985  7828 master.hpp:313] Adding offer
> 201306281556-143311683-36742-7801-0 with resources cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:50.296087  7828 master.cpp:1239] Sending 1 offers to framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.296416  7827 sched.cpp:427] Stopping framework
> '201306281556-143311683-36742-7801-0000'
> I0628 15:56:50.296422  7822 master.cpp:385] Master terminating
> I0628 15:56:50.296598  7825 slave.cpp:484] Slave asked to shut down by
> master@67.195.138.8:36742
> I0628 15:56:50.296633  7801 master.cpp:207] Shutting down master
> I0628 15:56:50.296675  7825 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by master@67.195.138.8:36742
> W0628 15:56:50.296846  7825 slave.cpp:1132] Ignoring shutdown framework
> 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:50.296869  7822 hierarchical_allocator_process.hpp:412]
> Deactivated framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.296880  7801 master.hpp:303] Removing task 0 with resources
> cpus=1; mem=500 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:50.296952  7825 slave.cpp:1883] master@67.195.138.8:36742 exited
> W0628 15:56:50.297201  7825 slave.cpp:1886] Master disconnected! Waiting
> for a new master to be elected
> I0628 15:56:50.297283  7825 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by master@67.195.138.8:36742
> W0628 15:56:50.297334  7825 slave.cpp:1132] Ignoring shutdown framework
> 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:50.297348  7828 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=500 (total allocatable: cpus=1; mem=500; ports=[];
> disk=0) on slave 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.297343  7801 master.hpp:323] Removing offer
> 201306281556-143311683-36742-7801-0 with resources cpus=1; mem=524;
> ports=[31000-32000]; disk=19149 on slave 201306281556-143311683-36742-7801-0
> I0628 15:56:50.297616  7828 hierarchical_allocator_process.hpp:616]
> Recovered cpus=1; mem=524; ports=[31000-32000]; disk=19149 (total
> allocatable: cpus=2; mem=1024; ports=[31000-32000]; disk=19149) on slave
> 201306281556-143311683-36742-7801-0 from framework
> 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.298339  7801 slave.cpp:439] Slave terminating
> I0628 15:56:50.305732  7801 slave.cpp:1111] Asked to shut down framework
> 201306281556-143311683-36742-7801-0000 by @0.0.0.0:0
> W0628 15:56:50.305790  7801 slave.cpp:1132] Ignoring shutdown framework
> 201306281556-143311683-36742-7801-0000 because it is terminating
> I0628 15:56:50.305794  7828 hierarchical_allocator_process.hpp:367]
> Removed framework 201306281556-143311683-36742-7801-0000
> I0628 15:56:50.299312  7826 detector.cpp:420] Master detector (slave(100)@
> 67.195.138.8:36742)  found 0 registered masters
> I0628 15:56:50.305979  7826 detector.cpp:441] Master detector (slave(100)@
> 67.195.138.8:36742) couldn't find any masters
> I0628 15:56:50.306002  7828 hierarchical_allocator_process.hpp:477]
> Removed slave 201306281556-143311683-36742-7801-0
> [       OK ] AllocatorZooKeeperTest/0.SlaveReregistersFirst (2099 ms)
> I0628 15:56:50.307878  7801 zookeeper_test_server.cpp:93] Shutdown
> ZooKeeperTestServer on port 47283
> [----------] 2 tests from AllocatorZooKeeperTest/0 (4154 ms total)
>
> [----------] Global test environment tear-down
> [==========] 196 tests from 37 test cases ran. (176700 ms total)
> [  PASSED  ] 195 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] ReaperTest.TerminatedChildProcess
>
>  1 FAILED TEST
> make[3]: *** [check-local] Error 1
> make[3]: Leaving directory `<
> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/ws/src
> '>
> make[2]: *** [check-am] Error 2
> make[2]: Leaving directory `<
> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/ws/src
> '>
> make[1]: *** [check] Error 2
> make[1]: Leaving directory `<
> https://builds.apache.org/job/Mesos-Trunk-Ubuntu-Build-In-Src-Set-JAVA_HOME/ws/src
> '>
> make: *** [check-recursive] Error 1
> Build step 'Execute shell' marked build as failure
>
