Hey Craig, can you describe how you're running this? Are you running
the master and slave individually from an installation?

The slave log will help diagnose this problem.
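
In particular, the stderr of the TaskTracker executors that are exiting should
say why they died. Assuming you started the daemons by hand with mostly
default settings (the --log_dir, work_dir location, and master port below are
just the usual defaults; substitute whatever you actually used), something
along these lines should turn the logs up:

  # start the daemons, logging to a known location
  mesos-master --log_dir=/var/log/mesos &
  mesos-slave --master=127.0.0.1:5050 --log_dir=/var/log/mesos &

  # the slave log itself
  less /var/log/mesos/mesos-slave.INFO

  # stdout/stderr of each launched executor, under the slave work_dir
  # (/tmp/mesos by default)
  find /tmp/mesos/slaves -name stderr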

On Thu, Mar 7, 2013 at 9:34 AM, Craig Vanderborgh
<[email protected]> wrote:
> Just found this in mapred-site.xml:
>
> #
> # Make sure to uncomment the 'mapred.mesos.executor' property,
> # when running the Hadoop JobTracker on a real Mesos cluster.
> # NOTE: You need to MANUALLY upload the Mesos executor bundle
> # to the location that is set as the value of this property.
> #  <property>
> #    <name>mapred.mesos.executor</name>
> #    <value>hdfs://hdfs.name.node:port/hadoop.zip</value>
> #  </property>
> #
>
> Is my pseudo-distributed configuration a "real Mesos cluster"?  Could the
> problem be that I haven't done this?
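>
> If that's the issue, I'm guessing the fix is roughly the following (the
> bundle name, HDFS host/port, and path are just placeholders for my
> pseudo-distributed setup):
>
>   hadoop fs -put hadoop.tar.gz hdfs://localhost:9000/hadoop.tar.gz
>
> and then in mapred-site.xml:
>
>   <property>
>     <name>mapred.mesos.executor</name>
>     <value>hdfs://localhost:9000/hadoop.tar.gz</value>
>   </property>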
>
> THANKS!
> Craig
>
> On Thu, Mar 7, 2013 at 9:52 AM, Craig Vanderborgh <
> [email protected]> wrote:
>
>> Does this mean that there is a problem with the slave configuration?  I'm
>> running pseudo-distributed (1 Mesos master and 1 Mesos slave on the same
>> host).
>>
>> FWIW: Spark jobs run fine on this configuration.
>>
>> Craig
>>
>>
>> On Thu, Mar 7, 2013 at 9:50 AM, Craig Vanderborgh <
>> [email protected]> wrote:
>>
>>> Okay - in order to get the jobtracker to come up I had to copy
>>> conf/core-site.xml and conf/hdfs-site.xml to the location where I unpacked
>>> hadoop.tar.gz.  The jobtracker now starts up and accepts jobs.
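>>>
>>> For the record, the copy itself was just something like the following,
>>> with /opt/hadoop standing in for wherever I actually unpacked the tarball:
>>>
>>>   cp conf/core-site.xml conf/hdfs-site.xml /opt/hadoop/conf/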
>>>
>>> So I then tried to run the "PI" benchmark to test the installation.  The
>>> output from PI looks like this:
>>>
>>> [craigv@m5 benchmarks]$ pi.sh
>>> Number of Maps  = 10
>>> Samples per Map = 1000000000
>>> Wrote input for Map #0
>>> Wrote input for Map #1
>>> Wrote input for Map #2
>>> Wrote input for Map #3
>>> Wrote input for Map #4
>>> Wrote input for Map #5
>>> Wrote input for Map #6
>>> Wrote input for Map #7
>>> Wrote input for Map #8
>>> Wrote input for Map #9
>>> Starting Job
>>> 13/03/07 09:43:57 WARN mapred.JobClient: Use GenericOptionsParser for
>>> parsing the arguments. Applications should implement Tool for the same.
>>> 13/03/07 09:43:57 INFO mapred.FileInputFormat: Total input paths to
>>> process : 10
>>> 13/03/07 09:43:57 INFO mapred.JobClient: Running job:
>>> job_201303070943_0001
>>> 13/03/07 09:43:59 INFO mapred.JobClient:  map 0% reduce 0%
>>>
>>> while the Mesosized jobtracker prints out the following endlessly:
>>>
>>> 13/03/07 09:49:34 INFO mapred.MesosScheduler: Launching task
>>> Task_Tracker_328 on http://m5:31000
>>> 13/03/07 09:49:34 INFO mapred.MesosScheduler: Unable to fully satisfy
>>> needed map/reduce slots: 8 map slots remaining
>>> 13/03/07 09:49:35 INFO mapred.MesosScheduler: Status update of
>>> Task_Tracker_328 to TASK_LOST with message Executor exited
>>> 13/03/07 09:49:35 INFO mapred.MesosScheduler: JobTracker Status
>>>       Pending Map Tasks: 10
>>>    Pending Reduce Tasks: 1
>>>          Idle Map Slots: 0
>>>       Idle Reduce Slots: 0
>>>      Inactive Map Slots: 2 (launched but no heartbeat yet)
>>>   Inactive Reduce Slots: 2 (launched but no heartbeat yet)
>>>        Needed Map Slots: 10
>>>     Needed Reduce Slots: 1
>>>
>>> So it looks like the TaskTracker is launched and then fails before it can
>>> start any tasks.  What should I try now?
>>>
>>> Craig
>>>
>>> On Wed, Mar 6, 2013 at 5:58 PM, Craig Vanderborgh <
>>> [email protected]> wrote:
>>>
>>>> Hi Vinod -
>>>>
>>>> You mentioned configuration changes for mapred-site.xml.  Do I also have
>>>> to modify hadoop-env.sh to specify JAVA_HOME, PROTOBUF_JAR, MESOS_JAR,
>>>> MESOS_NATIVE_LIBRARY, and HADOOP_CLASSPATH, as shown here:
>>>>
>>>> http://files.meetup.com/3138542/mesos-spark-meetup-04-05-12.pptx
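>>>>
>>>> Concretely, I'm imagining additions along these lines (every path and
>>>> version below is just a placeholder for my machine, not something I've
>>>> verified):
>>>>
>>>>   export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
>>>>   export PROTOBUF_JAR=/opt/protobuf/protobuf-2.4.1.jar
>>>>   export MESOS_JAR=/opt/mesos/mesos-0.10.0.jar
>>>>   export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
>>>>   export HADOOP_CLASSPATH=$PROTOBUF_JAR:$MESOS_JAR:$HADOOP_CLASSPATH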
>>>>
>>>> Please advise.
>>>>
>>>> Thanks,
>>>> Craig Vanderborgh
>>>>
>>>
>>>
>>
