Hi,
I have run the JobTracker successfully!
However, the JobTracker cannot launch the executor and TaskTracker successfully.
The log on mesos-slave is: 
I0115 17:13:31.821723 16796 gc.cpp:56] Scheduling 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_160/runs/841d4dc0-bdda-40ee-9389-335effaf73d1'
 for removal
I0115 17:13:31.821902 16796 gc.cpp:56] Scheduling 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_160'
 for removal
I0115 17:13:31.821946 16797 slave.cpp:2360] Cleaning up framework 
201401151627-524069056-5050-453-0002
I0115 17:13:31.822020 16793 status_update_manager.cpp:262] Closing status 
update streams for framework 201401151627-524069056-5050-453-0002
I0115 17:13:31.822059 16796 gc.cpp:56] Scheduling 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002'
 for removal
W0115 17:13:31.838464 16792 process_isolator.cpp:265] Failed to kill the 
process tree rooted at pid 30492: Failed to find process 30492
I0115 17:13:31.838608 16792 process_isolator.cpp:298] Asked to update resources 
for an unknown/killed executor 'executor_Task_Tracker_160' of framework 
201401151627-524069056-5050-453-0002
I0115 17:13:31.928426 16796 slave.cpp:2519] Framework 
201401151627-524069056-5050-453-0002 seems to have exited. Ignoring 
registration timeout for executor 'executor_Task_Tracker_131'
I0115 17:13:31.985304 16794 slave.cpp:779] Got assigned task Task_Tracker_161 
for framework 201401151627-524069056-5050-453-0002
I0115 17:13:31.986101 16799 gc.cpp:84] Unscheduling 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002'
 for removal
I0115 17:13:31.986351 16798 slave.cpp:890] Launching task Task_Tracker_161 for 
framework 201401151627-524069056-5050-453-0002
I0115 17:13:31.987820 16798 paths.hpp:336] Created executor directory 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_161/runs/083b118d-b820-4b48-8d41-d0a63f0d173e'
I0115 17:13:31.988062 16798 slave.cpp:1001] Queuing task 'Task_Tracker_161' for 
executor executor_Task_Tracker_161 of framework 
'201401151627-524069056-5050-453-0002
I0115 17:13:31.988062 16794 process_isolator.cpp:100] Launching 
executor_Task_Tracker_161 (cd hadoop-2* && env ; ./bin/hadoop 
org.apache.hadoop.mapred.MesosExecutor) in 
/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_161/runs/083b118d-b820-4b48-8d41-d0a63f0d173e
 with resources cpus(*):1; mem(*):1024' for framework 
201401151627-524069056-5050-453-0002
I0115 17:13:31.988178 16798 slave.cpp:532] Successfully attached file 
'/tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_161/runs/083b118d-b820-4b48-8d41-d0a63f0d173e'
I0115 17:13:31.989446 16794 process_isolator.cpp:163] Forked executor at 30527
I0115 17:13:33.819115 16792 process_isolator.cpp:479] Telling slave of 
terminated executor 'executor_Task_Tracker_161' of framework 
201401151627-524069056-5050-453-0002
I0115 17:13:33.819387 16798 slave.cpp:2158] Executor 
'executor_Task_Tracker_161' of framework 201401151627-524069056-5050-453-0002 
has exited with status 255
I0115 17:13:33.821244 16798 slave.cpp:1778] Handling status update TASK_LOST 
(UUID: 58c591f3-89cb-4e3d-9f99-59a26d5b524a) for task Task_Tracker_161 of 
framework 201401151627-524069056-5050-453-0002 from @0.0.0.0:0
I0115 17:13:33.821667 16797 status_update_manager.cpp:300] Received status 
update TASK_LOST (UUID: 58c591f3-89cb-4e3d-9f99-59a26d5b524a) for task 
Task_Tracker_161 of framework 201401151627-524069056-5050-453-0002
I0115 17:13:33.821872 16797 status_update_manager.cpp:351] Forwarding status 
update TASK_LOST (UUID: 58c591f3-89cb-4e3d-9f99-59a26d5b524a) for task 
Task_Tracker_161 of framework 201401151627-524069056-5050-453-0002 to 
master@192.168.60.31:5050
I0115 17:13:33.823534 16796 status_update_manager.cpp:375] Received status 
update acknowledgement (UUID: 58c591f3-89cb-4e3d-9f99-59a26d5b524a) for task 
Task_Tracker_161 of framework 201401151627-524069056-5050-453-0002
I0115 17:13:33.823809 16794 slave.cpp:2288] Cleaning up executor 
'executor_Task_Tracker_161' of framework 201401151627-524069056-5050-453-0002

The log on mesos-master is:
I0115 17:17:34.196712   455 master.cpp:1448] Sending 3 offers to framework 
201401151627-524069056-5050-453-0002
I0115 17:17:34.215025   455 master.cpp:1685] Processing reply for offer 
201401151627-524069056-5050-453-1072 on slave 201401151627-524069056-5050-453-3 
(hadoop03.ihep.ac.cn) for framework 201401151627-524069056-5050-453-0002
I0115 17:17:34.215265   455 master.hpp:318] Adding task Task_Tracker_280 with 
resources cpus(*):1; mem(*):1024; disk(*):1024; ports(*):[31000-31000, 
31001-31001] on slave 201401151627-524069056-5050-453-3 (hadoop03.ihep.ac.cn)
I0115 17:17:34.215384   455 master.cpp:1805] Launching task Task_Tracker_280 of 
framework 201401151627-524069056-5050-453-0002 with resources cpus(*):1; 
mem(*):1024; disk(*):1024; ports(*):[31000-31000, 31001-31001] on slave 
201401151627-524069056-5050-453-3 (hadoop03.ihep.ac.cn)
I0115 17:17:34.215953   461 hierarchical_allocator_process.hpp:551] Framework 
201401151627-524069056-5050-453-0002 filtered slave 
201401151627-524069056-5050-453-3 for 5secs
I0115 17:17:34.216197   455 master.cpp:1685] Processing reply for offer 
201401151627-524069056-5050-453-1073 on slave 201401151627-524069056-5050-453-2 
(hadoop05.ihep.ac.cn) for framework 201401151627-524069056-5050-453-0002
I0115 17:17:34.216398   455 master.cpp:1685] Processing reply for offer 
201401151627-524069056-5050-453-1074 on slave 201401151627-524069056-5050-453-1 
(hadoop04.ihep.ac.cn) for framework 201401151627-524069056-5050-453-0002
I0115 17:17:34.216512   459 hierarchical_allocator_process.hpp:551] Framework 
201401151627-524069056-5050-453-0002 filtered slave 
201401151627-524069056-5050-453-2 for 5secs
I0115 17:17:34.216637   459 hierarchical_allocator_process.hpp:551] Framework 
201401151627-524069056-5050-453-0002 filtered slave 
201401151627-524069056-5050-453-1 for 5secs
I0115 17:17:35.741317   461 master.cpp:1310] Executor executor_Task_Tracker_280 
of framework 201401151627-524069056-5050-453-0002 on slave 
201401151627-524069056-5050-453-3 (hadoop03.ihep.ac.cn) exited with status 65280
I0115 17:17:35.741543   461 master.cpp:1214] Status update TASK_LOST (UUID: 
c9b1ddb6-86e6-4205-aa89-8566aeee5506) for task Task_Tracker_280 of framework 
201401151627-524069056-5050-453-0002 from slave(1)@192.168.60.106:5051
I0115 17:17:35.741745   461 master.hpp:331] Removing task Task_Tracker_280 with 
resources cpus(*):1; mem(*):1024; disk(*):1024; ports(*):[31000-31000, 
31001-31001] on slave 201401151627-524069056-5050-453-3 (hadoop03.ihep.ac.cn)

The log on JobTracker is:
2014-01-15 17:17:34,198 INFO org.apache.hadoop.mapred.ResourcePolicy: Launching 
task Task_Tracker_280 on http://hadoop03.ihep.ac.cn:31000 with mapSlots=1 
reduceSlots=0
2014-01-15 17:17:34,198 INFO org.apache.hadoop.mapred.ResourcePolicy: URI: 
hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz, name: 
hadoop-2.0.0-mr1-cdh4.2.2.tar.gz
2014-01-15 17:17:34,214 INFO org.apache.hadoop.mapred.ResourcePolicy: Satisfied 
map and reduce slots needed.
2014-01-15 17:17:35,742 INFO org.apache.hadoop.mapred.MesosScheduler: Status 
update of Task_Tracker_280 to TASK_LOST with message Executor terminated
2014-01-15 17:17:35,742 INFO org.apache.hadoop.mapred.MesosScheduler: Removing 
terminated TaskTracker: http://hadoop03.ihep.ac.cn:31000
2014-01-15 17:17:36,199 INFO org.apache.hadoop.mapred.ResourcePolicy: 
JobTracker Status
      Pending Map Tasks: 1
   Pending Reduce Tasks: 0
      Running Map Tasks: 0
   Running Reduce Tasks: 0
         Idle Map Slots: 0
      Idle Reduce Slots: 0
     Inactive Map Slots: 0 (launched but no hearbeat yet)
  Inactive Reduce Slots: 0 (launched but no hearbeat yet)
       Needed Map Slots: 1
    Needed Reduce Slots: 0
     Unhealthy Trackers: 0
2014-01-15 17:17:36,199 INFO org.apache.hadoop.mapred.ResourcePolicy: Launching 
task Task_Tracker_281 on http://hadoop03.ihep.ac.cn:31000 with mapSlots=1 
reduceSlots=0
2014-01-15 17:17:36,199 INFO org.apache.hadoop.mapred.ResourcePolicy: URI: 
hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz, name: 
hadoop-2.0.0-mr1-cdh4.2.2.tar.gz
2014-01-15 17:17:36,215 INFO org.apache.hadoop.mapred.ResourcePolicy: Satisfied 
map and reduce slots needed.
2014-01-15 17:17:37,742 INFO org.apache.hadoop.mapred.MesosScheduler: Status 
update of Task_Tracker_281 to TASK_LOST with message Executor terminated
2014-01-15 17:17:37,743 INFO org.apache.hadoop.mapred.MesosScheduler: Removing 
terminated TaskTracker: http://hadoop03.ihep.ac.cn:31000
2014-01-15 17:17:38,201 INFO org.apache.hadoop.mapred.ResourcePolicy: 
JobTracker Status
      Pending Map Tasks: 1
   Pending Reduce Tasks: 0
      Running Map Tasks: 0
   Running Reduce Tasks: 0
         Idle Map Slots: 0
      Idle Reduce Slots: 0
       Needed Map Slots: 1
     Inactive Map Slots: 0 (launched but no hearbeat yet)
  Inactive Reduce Slots: 0 (launched but no hearbeat yet)
    Needed Reduce Slots: 0
     Unhealthy Trackers: 0

Could you please tell me why the TaskTracker cannot launch?
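
For reference, I assume the executor's own stderr in the sandbox should show why it exits with status 255 right after being forked. A rough sketch of how to look at it (the path is taken from the slave log above; the executor ID and run UUID will differ for each attempt):

cd /tmp/mesos/slaves/201401151627-524069056-5050-453-0/frameworks/201401151627-524069056-5050-453-0002/executors/executor_Task_Tracker_161/runs/083b118d-b820-4b48-8d41-d0a63f0d173e
ls
cat stdout stderr

Since the executor command is "cd hadoop-2* && env ; ./bin/hadoop org.apache.hadoop.mapred.MesosExecutor", I would expect any problem with fetching or unpacking the executor tarball (for example a missing hadoop-2* directory or ./bin/hadoop script) to show up there.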

> -----Original Message-----
> From: "Adam Bordelon" <a...@mesosphere.io>
> Sent: Wednesday, January 15, 2014
> To: dev@mesos.apache.org
> Cc: 
> Subject: Re: Re: Re: How to run hadoop Jobtracker
> 
>  /etc/init.d/hadoop-0.20-mapreduce-jobtracker should be a shell script.
> Look inside and see what the 'start' action is doing behind the scenes.
> Chances are it's updating a library path and other environment settings
> that you won't get just from running 'hadoop jobtracker'
> 
> 
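To follow up on this suggestion: to see which environment settings the init script provides, something along these lines should help (the exact variable names depend on the CDH packaging, so this is only a sketch):

grep -nE 'export|HADOOP|JAVA' /etc/init.d/hadoop-0.20-mapreduce-jobtracker

Exporting the same variables in the shell before running 'hadoop jobtracker' should then reproduce what the 'start' action sets up.
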
> On Tue, Jan 14, 2014 at 10:59 PM, HUO Jing <huoj...@ihep.ac.cn> wrote:
> 
> > I found out that even without Mesos, I cannot start the JobTracker with the
> > command: /usr/lib/hadoop-0.20-mapreduce/bin/hadoop jobtracker
> > The error is:
> > Exception in thread "main" java.lang.NoClassDefFoundError:
> > org/apache/commons/logging/LogFactory
> >         at
> > org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:150)
> >         at
> > org.apache.hadoop.mapred.JobTracker.<clinit>(JobTracker.java:131)
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.commons.logging.LogFactory
> >         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> >         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> >         ... 2 more
> > Could not find the main class: org.apache.hadoop.mapred.JobTracker.
> >  Program will exit.
> >
> > But if I use the command /etc/init.d/hadoop-0.20-mapreduce-jobtracker
> > start, it runs successfully.
> >
> > Does anyone know why this happens and how to deal with it?
> >
> > Thank you very much
> >
> > > -----Original Message-----
> > > From: "Adam Bordelon" <a...@mesosphere.io>
> > > Sent: Wednesday, January 15, 2014
> > > To: dev@mesos.apache.org
> > > Cc: mesos-dev <mesos-...@incubator.apache.org>
> > > Subject: Re: Re: How to run hadoop Jobtracker
> > >
> > > You're much further now; JT is actually starting. These Hadoop errors don't
> > > look directly related to Mesos, but I'll try to help anyway:
> > > 1) Permissions errors trying to chmod a file as user:mapred. You could try
> > > running as root if you don't care about security, or dig into the
> > > jobtracker logs to see what file/directory it was trying to chmod when it
> > > failed. That should give us a clue.
> > > 14/01/15 13:09:15 ERROR security.UserGroupInformation:
> > > PriviledgedActionException as:mapred (auth:SIMPLE) cause:ENOENT: No such
> > > file or directory
> > > 14/01/15 13:09:15 WARN mapred.JobTracker: Error starting tracker: ENOENT:
> > > No such file or directory
> > > at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
> > >
> > > 2) Something is already running on your JT node at port 8021. Run
> > > "netstat -an |grep 8021" (without running JT yet) to see what's on port
> > > 8021. If
> > > there's already something on port 8021, then give hadoop/JT a different
> > > port to use. If not, then it's probably a problem with the JT
> > > restarting/rebinding after the first error above.
> > > 14/01/15 13:09:16 FATAL mapred.JobTracker: java.net.BindException: Problem
> > > binding to hadoop06.ihep.ac.cn/192.168.60.31:8021 : Address already in use
> > >
> > > 3) Also, I notice that your configuration for mapred.job.tracker is set to
> > > localhost:9001, but JT is starting up with port 8021. Perhaps that's a port
> > > just for RpcMetrics, but it makes me wonder if your configuration is
> > > actually being read. What path/file are you setting the config in?
> > > 14/01/15 13:09:15 INFO ipc.Server: Starting Socket Reader #1 for port 8021
> > > 14/01/15 13:09:15 INFO metrics.RpcMetrics: Initializing RPC Metrics with
> > > hostName=JobTracker, port=8021
> > >
> > >
> > > On Tue, Jan 14, 2014 at 9:29 PM, HUO Jing <huoj...@ihep.ac.cn> wrote:
> > >
> > > > When I try this command:
> > > > "MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop jobtracker"
> > > > There are some errors:
> > > > 14/01/15 13:09:14 INFO mapred.JobTracker: STARTUP_MSG:
> > > > /************************************************************
> > > > STARTUP_MSG: Starting JobTracker
> > > > STARTUP_MSG:   host = hadoop06.ihep.ac.cn/192.168.60.31
> > > > STARTUP_MSG:   args = []
> > > > STARTUP_MSG:   version = 0.20.2-cdh3u5
> > > > STARTUP_MSG:   build = git://
> > > > hadoop03.ihep.ac.cn/publicfs/cc/zangds/dmdp/hadoop-0.20.2-cdh3u5-hce-r ;
> > > > compiled by 'zangds' on Sun Mar 24 23:36:42 CST 2013
> > > > ************************************************************/
> > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Updating the current master key for generating delegation tokens
> > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Starting expired delegation token remover thread,
> > > > tokenRemoverScanInterval=60 min(s)
> > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Updating the current master key for generating delegation tokens
> > > > 14/01/15 13:09:15 INFO mapred.JobTracker: Scheduler configured with
> > > > (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> > limitMaxMemForMapTasks,
> > > > limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > > > 14/01/15 13:09:15 INFO util.HostsFileReader: Refreshing hosts
> > > > (include/exclude) list
> > > > 14/01/15 13:09:15 INFO mapred.JobTracker: Starting jobtracker with
> > owner
> > > > as mapred
> > > > 14/01/15 13:09:15 INFO ipc.Server: Starting Socket Reader #1 for port
> > 8021
> > > > 14/01/15 13:09:15 INFO metrics.RpcMetrics: Initializing RPC Metrics
> > with
> > > > hostName=JobTracker, port=8021
> > > > 14/01/15 13:09:15 INFO metrics.RpcDetailedMetrics: Initializing RPC
> > > > Metrics with hostName=JobTracker, port=8021
> > > > 14/01/15 13:09:15 INFO mortbay.log: Logging to
> > > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > > > org.mortbay.log.Slf4jLog
> > > > 14/01/15 13:09:15 INFO http.HttpServer: Added global filtersafety
> > > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > > > 14/01/15 13:09:15 INFO util.NativeCodeLoader: Loaded the native-hadoop
> > > > library
> > > > 14/01/15 13:09:15 ERROR security.UserGroupInformation:
> > > > PriviledgedActionException as:mapred (auth:SIMPLE) cause:ENOENT: No
> > such
> > > > file or directory
> > > > 14/01/15 13:09:15 WARN mapred.JobTracker: Error starting tracker:
> > ENOENT:
> > > > No such file or directory
> > > >         at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
> > > >         at
> > > >
> > org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:521)
> > > >         at
> > > >
> > org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
> > > >         at
> > > > org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:195)
> > > >         at
> > org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:491)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1852)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1849)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >         at
> > > >
> > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1849)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
> > > >         at
> > > > org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
> > > >         at
> > > > org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)
> > > >
> > > > 14/01/15 13:09:16 INFO security.UserGroupInformation: JAAS
> > Configuration
> > > > already set up for Hadoop, not re-installing.
> > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Updating the current master key for generating delegation tokens
> > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Starting expired delegation token remover thread,
> > > > tokenRemoverScanInterval=60 min(s)
> > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > Updating the current master key for generating delegation tokens
> > > > 14/01/15 13:09:16 INFO mapred.JobTracker: Scheduler configured with
> > > > (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> > limitMaxMemForMapTasks,
> > > > limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > > > 14/01/15 13:09:16 INFO util.HostsFileReader: Refreshing hosts
> > > > (include/exclude) list
> > > > 14/01/15 13:09:16 INFO mapred.JobTracker: Starting jobtracker with
> > owner
> > > > as mapred
> > > > 14/01/15 13:09:16 FATAL mapred.JobTracker: java.net.BindException:
> > Problem
> > > > binding to hadoop06.ihep.ac.cn/192.168.60.31:8021 : Address already
> > in use
> > > >         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
> > > >         at
> > org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:320)
> > > >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1534)
> > > >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:539)
> > > >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:500)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1817)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
> > > >         at
> > > > org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
> > > >         at
> > > > org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
> > > >         at
> > org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)
> > > > Caused by: java.net.BindException: Address already in use
> > > >         at sun.nio.ch.Net.bind(Native Method)
> > > >         at
> > > >
> > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> > > >         at
> > sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> > > >         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
> > > >         ... 9 more
> > > >
> > > > 14/01/15 13:09:16 INFO mapred.JobTracker: SHUTDOWN_MSG:
> > > > /************************************************************
> > > > SHUTDOWN_MSG: Shutting down JobTracker at
> > > > hadoop06.ihep.ac.cn/192.168.60.31
> > > > ************************************************************/
> > > >
> > > > please tell me what's wrong.
> > > >
> > > > My Hadoop version is CDH4.5.0, installed from yum.
> > > > I put hadoop-mesos-0.0.5.jar in /usr/lib/hadoop-0.20-mapreduce/lib/,
> > > > made a tar package, put the package hadoop-0.20-mapreduce.tar.gz on
> > > > HDFS, and then changed the configuration:
> > > > <property>
> > > >   <name>mapred.job.tracker</name>
> > > >   <value>localhost:9001</value>
> > > > </property>
> > > > <property>
> > > >   <name>mapred.jobtracker.taskScheduler</name>
> > > >   <value>org.apache.hadoop.mapred.MesosScheduler</value>
> > > > </property>
> > > > <property>
> > > >   <name>mapred.mesos.taskScheduler</name>
> > > >   <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
> > > > </property>
> > > > <property>
> > > >   <name>mapred.mesos.master</name>
> > > >   <value>localhost:5050</value>
> > > > </property>
> > > > <property>
> > > >   <name>mapred.mesos.executor.uri</name>
> > > >   <value>hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz</value>
> > > > </property>
> > > >
> > > > But this does not work, please help me!
> > > >
> > > > > -----Original Message-----
> > > > > From: "Adam Bordelon" <a...@mesosphere.io>
> > > > > Sent: Wednesday, January 15, 2014
> > > > > To: dev@mesos.apache.org
> > > > > Cc: mesos-dev <mesos-...@incubator.apache.org>
> > > > > Subject: Re: How to run hadoop Jobtracker
> > > > >
> > > > > Try running
> > > > > "MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop jobtracker"
> > > > > The primary executable to run is the 'hadoop' executable, but it needs
> > > > > to know where to find MESOS_NATIVE_LIBRARY, so we set that environment
> > > > > variable on the command-line first. You could set it in other ways
> > > > > instead (in that user's .bashrc or by creating a wrapper around 'hadoop'
> > > > > that sets the variable before launching 'hadoop').
> > > > > You are very close to having Hadoop running on top of Mesos.
> > > > > Good luck!
> > > > > -Adam-
> > > > >
> > > > >
> > > > > On Tue, Jan 14, 2014 at 6:47 AM, HUO Jing <huoj...@ihep.ac.cn>
> > wrote:
> > > > >
> > > > > > Hi,
> > > > > > I have installed Mesos and Hadoop CDH4.5.0, changed mapred-site.xml,
> > > > > > packaged hadoop-mesos-0.0.5.jar with Hadoop, and uploaded it to HDFS.
> > > > > > In a word, I have done everything on this page:
> > > > > > https://github.com/mesos/hadoop
> > > > > > But when I try to run the JobTracker with the command:
> > > > > > bash-3.2$ /usr/local/lib/libmesos-0.14.0.so hadoop jobtracker
> > > > > > it says: Segmentation fault
> > > > > > Please tell me how to deal with this.
> > > > > >
> > > > > >
> > > > > > Huojing
> > > > > >
> > > >
> > > >
> >
> >
