>
> 13/03/07 09:49:34 INFO mapred.MesosScheduler: Launching task
> Task_Tracker_328 on http://m5:31000
> 13/03/07 09:49:34 INFO mapred.MesosScheduler: Unable to fully satisfy
> needed map/reduce slots: 8 map slots remaining
> 13/03/07 09:49:35 INFO mapred.MesosScheduler: Status update of
> Task_Tracker_328 to TASK_LOST with message Executor exited
> 13/03/07 09:49:35 INFO mapred.MesosScheduler: JobTracker Status
>       Pending Map Tasks: 10
>    Pending Reduce Tasks: 1
>          Idle Map Slots: 0
>       Idle Reduce Slots: 0
>      Inactive Map Slots: 2 (launched but no heartbeat yet)
>   Inactive Reduce Slots: 2 (launched but no heartbeat yet)
>        Needed Map Slots: 10
>     Needed Reduce Slots: 1
>

It looks like the resources offered by the mesos slave are not sufficient to
launch the task trackers. Can you look in the master/slave logs and see what
resources the slave is offering? Also, there should be a line in the
jobtracker log that says "Declining offer...". Can you tell us what it says?
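
If it saves you some digging, here is a rough Python sketch for pulling
those lines out of the logs. The log paths are just placeholders (point
them at wherever your jobtracker and mesos slave logs actually live), and
the exact wording of the resource lines can differ between versions, so
treat the patterns as a starting point rather than something exact:

    # Placeholder paths -- substitute your actual jobtracker and slave logs.
    JOBTRACKER_LOG = "/var/log/hadoop/hadoop-jobtracker.log"
    SLAVE_LOG = "/var/log/mesos/mesos-slave.INFO"

    def grep(path, patterns):
        # Print every line containing any of the given substrings.
        with open(path, errors="replace") as f:
            for line in f:
                if any(p in line for p in patterns):
                    print(line.rstrip("\n"))

    # Which offers the scheduler is turning down, and why.
    grep(JOBTRACKER_LOG, ["Declining offer"])
    # Resource-related lines in the slave log (the cpus/mem it is offering).
    grep(SLAVE_LOG, ["cpus", "mem"])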

Finally, it looks like the Task Trackers are getting LOST when being
launched on the slave. You'll have to look into the slave log and/or the
executor logs. By default the executor logs should be in
"/tmp/mesos/slaves/<slave-id>/frameworks/<framework-id>/executors/<executor-id>/runs/latest".
You can get the exact executor work directory path by looking in the
mesos slave log for a line that says "Created executor work directory...".
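
If it helps, here is a small sketch that walks the default /tmp/mesos
layout described above and dumps each executor's stderr. The glob pattern
just mirrors that directory structure; adjust it if your slave is running
with a different work directory:

    import glob, os

    # Assumes the default /tmp/mesos work directory layout mentioned above.
    pattern = "/tmp/mesos/slaves/*/frameworks/*/executors/*/runs/latest"

    for run_dir in glob.glob(pattern):
        stderr_path = os.path.join(run_dir, "stderr")
        if os.path.exists(stderr_path):
            print("==> " + stderr_path)
            with open(stderr_path, errors="replace") as f:
                print(f.read())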

Let me know if that helps to figure out the problem.

Vinod



> So it looks like the tasktracker tries to start the tasks and then fails.
>  What should I try now?
>
> Craig
>
> On Wed, Mar 6, 2013 at 5:58 PM, Craig Vanderborgh <
> [email protected]> wrote:
>
> > Hi Vinod -
> >
> > You mentioned configuration changes for mapred-site.xml.  Do I also have
> > to modify hadoop-env.sh, to specify JAVA_HOME, PROTOBUF_JAR, MESOS_JAR,
> > MESOS_NATIVE_LIBRARY, and HADOOP_CLASSPATH as shown here:
> >
> > http://files.meetup.com/3138542/mesos-spark-meetup-04-05-12.pptx
> >
> > Please advise..
> >
> > Thanks,
> > Craig Vanderborgh
> >
