Thanks Deepal, were you unable to close the issue? Cheers, Chris
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Senior Computer Scientist
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 171-266B, Mailstop: 171-246
Email: [email protected]
WWW: http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Assistant Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-----Original Message-----
From: "Deepal Jayasinghe (JIRA)" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Saturday, May 4, 2013 9:24 AM
To: "[email protected]" <[email protected]>
Subject: [jira] [Issue Comment Deleted] (MESOS-258) mesos-master / mesos-slave => error: [Errno 32] Broken pipe

>
>    [ https://issues.apache.org/jira/browse/MESOS-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
>Deepal Jayasinghe updated MESOS-258:
>------------------------------------
>
>    Comment: was deleted
>
>(was: As per Chris Mattmann's request I am closing this issue.)
>
>> mesos-master / mesos-slave => error: [Errno 32] Broken pipe
>> -----------------------------------------------------------
>>
>>                 Key: MESOS-258
>>                 URL: https://issues.apache.org/jira/browse/MESOS-258
>>             Project: Mesos
>>          Issue Type: Question
>>          Components: master, slave
>>    Affects Versions: 0.9.0
>>         Environment: MacBook Pro / Intel 64bit / OS X 10.6.8
>>            Reporter: Robert Poor
>>            Priority: Trivial
>>              Labels: masterslave
>>             Fix For: 0.9.0
>>
>>
>> [meta comment: Total newcomer to mesos -- pardon what might be a trivial problem. (Meta-meta-comment: I couldn't find any pointers to forums or user groups where this might be more appropriately posted!)]
>>
>> Short form: Working with a fresh and sandboxed build. When I start mesos-slave.sh, the master starts throwing [Errno 32] Broken pipe errors. Have I misconfigured something?
>> What additional info should I gather?
>>
>> Full synopsis:
>>
>> Did:
>>
>> ===== build and test:
>>
>> $ svn co https://svn.apache.org/repos/asf/incubator/mesos/trunk mesos-trunk
>> $ ./bootstrap
>> $ ./configure --prefix=$SANDBOX/usr --with-webui --with-included-zookeeper
>> $ make
>> $ make check
>> [==========] 172 tests from 33 test cases ran. (59027 ms total)
>> [  PASSED  ] 172 tests.
>> $ # all looks good so far!
>>
>> ===== launch mesos-master:
>>
>> $ bin/mesos-master.sh
>> I0821 07:13:34.352710 1884560576 main.cpp:115] Build: 2012-08-21 06:53:30 by r
>> I0821 07:13:34.371057 1884560576 main.cpp:116] Starting Mesos master
>> I0821 07:13:34.371402 19939328 master.cpp:301] Master started on 192.168.5.136:5050
>> I0821 07:13:34.371461 19939328 master.cpp:316] Master ID: 201208210713-2282072256-5050-10221
>> W0821 07:13:34.371687 19402752 master.cpp:77] No whitelist given. Advertising offers for all slaves
>> I0821 07:13:34.374063 19939328 master.cpp:542] Elected as master!
>> I0821 07:13:34.385547 1884560576 webui.cpp:55] Loading webui script at '/Users/r/Projects/AmpCamp/packages/mesos-trunk/src/webui/master/webui.py'
>> I0821 07:13:35.372010 20475904 hierarchical_allocator_process.hpp:537] Performed allocation for 0 slaves in 0.03 milliseconds
>> Bottle server starting up (using WSGIRefServer())...
>> Listening on http://0.0.0.0:8080/
>>
>> ===== verified in web browser.
>> looks good!
>>
>> ===== launch mesos-slave in separate shell:
>>
>> $ bin/mesos-slave.sh --master=0.0.0.0:8080
>> I0821 07:15:10.427289 1884560576 main.cpp:123] Creating "process" isolation module
>> I0821 07:15:10.427921 1884560576 main.cpp:131] Build: 2012-08-21 06:53:30 by r
>> I0821 07:15:10.427947 1884560576 main.cpp:132] Starting Mesos slave
>> W0821 07:15:10.428102 1884560576 slave.cpp:124] Failed to auto-detect the size of main memory, defaulting to 1024 MB
>> I0821 07:15:10.428475 19427328 slave.cpp:173] Slave started on 1)@192.168.5.136:52298
>> I0821 07:15:10.428505 19427328 slave.cpp:174] Slave resources: cpus=4; mem=1024
>> I0821 07:15:10.430325 19427328 slave.cpp:342] New master detected at [email protected]:8080
>> I0821 07:15:10.431522 18890752 slave.cpp:1124] Process exited: @0.0.0.0:0
>> W0821 07:15:10.431581 18890752 slave.cpp:1127] WARNING! Master disconnected! Waiting for a new master to be elected.
>> I0821 07:15:10.442154 1884560576 webui.cpp:55] Loading webui script at '/Users/r/Projects/AmpCamp/packages/mesos-trunk/src/webui/slave/webui.py'
>> Bottle server starting up (using WSGIRefServer())...
>> Listening on http://0.0.0.0:8081/
>> Use Ctrl-C to quit.
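[Editor's note: in the log above, the slave was started with --master=0.0.0.0:8080, but the master's own log reported "Master started on 192.168.5.136:5050"; port 8080 is only the Bottle web UI. A minimal sketch, assuming the quoted log line, of extracting the master's actual endpoint (the variable names here are illustrative, not Mesos code):]

```python
# Assumption: this is the "Master started on ..." line quoted from the
# master log above. The slave should target this endpoint, not port 8080.
log_line = "I0821 07:13:34.371402 19939328 master.cpp:301] Master started on 192.168.5.136:5050"

# The endpoint is the last whitespace-delimited field of the line.
master_addr = log_line.rsplit(None, 1)[-1]
print(master_addr)  # -> 192.168.5.136:5050

# One would then launch the slave against it, e.g.:
#   bin/mesos-slave.sh --master=192.168.5.136:5050
```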
>> ===== back in the mesos-master window, I'm now seeing this:
>>
>> I0821 07:17:01.445639 19402752 hierarchical_allocator_process.hpp:537] Performed allocation for 0 slaves in 0.038 milliseconds
>> Traceback (most recent call last):
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/wsgiref/handlers.py", line 94, in run
>>     self.finish_response()
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/wsgiref/handlers.py", line 135, in finish_response
>>     self.write(data)
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/wsgiref/handlers.py", line 218, in write
>>     self.send_headers()
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/wsgiref/handlers.py", line 274, in send_headers
>>     self.send_preamble()
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/wsgiref/handlers.py", line 200, in send_preamble
>>     'Date: %s\r\n' % format_date_time(time.time())
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 297, in write
>>     self.flush()
>>   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 284, in flush
>>     self._sock.sendall(buffer)
>> error: [Errno 32] Broken pipe
>>
>> ===== When I ^c out of the mesos-slave, the errors stop
>
>--
>This message is automatically generated by JIRA.
>If you think it was sent incorrectly, please contact your JIRA administrators
>For more information on JIRA, see: http://www.atlassian.com/software/jira
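[Editor's note: the traceback above is consistent with the port mix-up. The slave, speaking the Mesos wire protocol, connects to the web UI's HTTP port and drops the connection, so the HTTP server ends up writing into a dead socket. A self-contained sketch (hypothetical, not Mesos code) of how a peer hanging up surfaces as [Errno 32] on the server side; depending on timing and platform the error may also arrive as ECONNRESET:]

```python
import errno
import socket
import threading

def serve_and_write(listener, caught):
    """Accept one connection and keep writing until the peer's close
    comes back as an error (EPIPE / Broken pipe, or ECONNRESET)."""
    conn, _ = listener.accept()
    try:
        for _ in range(200):
            conn.sendall(b"x" * 65536)
    except OSError as e:
        caught.append(e.errno)
    finally:
        conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
port = listener.getsockname()[1]

caught = []
server = threading.Thread(target=serve_and_write, args=(listener, caught))
server.start()

client = socket.create_connection(("127.0.0.1", port))
client.close()  # hang up immediately, as a non-HTTP peer might
server.join()
listener.close()

print(errno.errorcode.get(caught[0]))  # typically EPIPE or ECONNRESET
```

This is why the errors stop as soon as the slave is killed: no peer is left connecting to the web UI port and hanging up mid-response.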
