The error says to set your HADOOP_HOME variable to the absolute path of the
directory that contains the Hadoop core jar. Did you set HADOOP_HOME yourself,
or did something else set it? If you set it, change it to the directory that
actually contains the jar file; I'm pretty sure Hadoop won't recursively
search the folders under your currently set home for the jar.

On Jan 15, 2014 5:54 AM, "HUO Jing" <huoj...@ihep.ac.cn> wrote:
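For what it's worth, a minimal shell sketch of what that error box is asking for. The /usr/lib/hadoop-0.20-mapreduce path is only a guess based on the CDH packaging mentioned later in the thread; verify it against your own install:

```shell
# HADOOP_HOME must point at the directory *containing*
# hadoop-core-VERSION.jar, not at the jar file itself.
# The path below is a guess for a CDH yum install; adjust as needed.
export HADOOP_HOME=/usr/lib/hadoop-0.20-mapreduce
echo "HADOOP_HOME=$HADOOP_HOME"
# Sanity check: this prints the core jar's path if the directory is
# right, and nothing if the jar lives elsewhere.
find "$HADOOP_HOME" -maxdepth 1 -name 'hadoop-core-*.jar' 2>/dev/null || true
```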
> I changed
> <property>
>   <name>mapred.mesos.executor.uri</name>
>   <value>hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz</value>
> </property>
> to
> <property>
>   <name>mapred.mesos.executor.uri</name>
>   <value>hdfs://hadoop06.ihep.ac.cn:8020/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz</value>
> </property>
> so the slave can now download hadoop-2.0.0-mr1-cdh4.2.2.tar.gz.
> But the task error is like this:
> +================================================================+
> | Error: HADOOP_HOME is not set correctly                        |
> +----------------------------------------------------------------+
> | Please set your HADOOP_HOME variable to the absolute path of   |
> | the directory that contains hadoop-core-VERSION.jar or         |
> | share/hadoop/mapreduce1/hadoop-core-VERSION.jar.               |
> +================================================================+
> And when I run the command "echo $HADOOP_HOME", the result is
> "/usr/lib/hadoop".
>
> Please help me out.
>
> > -----Original Message-----
> > From: "Adam Bordelon" <a...@mesosphere.io>
> > Sent: Wednesday, January 15, 2014
> > To: dev@mesos.apache.org
> > Cc:
> > Subject: Re: Re: Re: How to run hadoop Jobtracker
> >
> > /etc/init.d/hadoop-0.20-mapreduce-jobtracker should be a shell script.
> > Look inside and see what the 'start' action is doing behind the scenes.
> > Chances are it's updating a library path and other environment settings
> > that you won't get just from running 'hadoop jobtracker'.
> >
> > On Tue, Jan 14, 2014 at 10:59 PM, HUO Jing <huoj...@ihep.ac.cn> wrote:
> >
> > > I have found that even without Mesos, I cannot start the jobtracker
> > > with the command: /usr/lib/hadoop-0.20-mapreduce/bin/hadoop jobtracker
> > > The error is:
> > > Exception in thread "main" java.lang.NoClassDefFoundError:
> > > org/apache/commons/logging/LogFactory
> > >         at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:150)
> > >         at org.apache.hadoop.mapred.JobTracker.<clinit>(JobTracker.java:131)
> > > Caused by: java.lang.ClassNotFoundException:
> > > org.apache.commons.logging.LogFactory
> > >         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> > >         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> > >         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> > >         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> > >         ... 2 more
> > > Could not find the main class: org.apache.hadoop.mapred.JobTracker.
> > > Program will exit.
> > >
> > > But if I use the command: /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
> > > it runs successfully.
> > >
> > > Does anyone know why this happens, and how to deal with it?
> > >
> > > Thank you very much
> > >
> > > > -----Original Message-----
> > > > From: "Adam Bordelon" <a...@mesosphere.io>
> > > > Sent: Wednesday, January 15, 2014
> > > > To: dev@mesos.apache.org
> > > > Cc: mesos-dev <mesos-...@incubator.apache.org>
> > > > Subject: Re: Re: How to run hadoop Jobtracker
> > > >
> > > > You're much further along now; the JT is actually starting. These
> > > > Hadoop errors don't look directly related to Mesos, but I'll try to
> > > > help anyway:
> > > >
> > > > 1) Permissions errors trying to chmod a file as user:mapred.
> > > > You could try running as root if you don't care about security, or
> > > > dig into the jobtracker logs to see what file/directory it was trying
> > > > to chmod when it failed. That should give us a clue.
> > > > 14/01/15 13:09:15 ERROR security.UserGroupInformation:
> > > > PriviledgedActionException as:mapred (auth:SIMPLE) cause:ENOENT: No
> > > > such file or directory
> > > > 14/01/15 13:09:15 WARN mapred.JobTracker: Error starting tracker:
> > > > ENOENT: No such file or directory
> > > >         at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
> > > >
> > > > 2) Something is already running on your JT node at port 8021. Run
> > > > "netstat -an | grep 8021" (without running the JT yet) to see what's
> > > > on port 8021. If there's already something on port 8021, give
> > > > hadoop/the JT a different port to use. If not, then it's probably a
> > > > problem with the JT restarting/rebinding after the first error above.
> > > > 14/01/15 13:09:16 FATAL mapred.JobTracker: java.net.BindException:
> > > > Problem binding to hadoop06.ihep.ac.cn/192.168.60.31:8021 : Address
> > > > already in use
> > > >
> > > > 3) Also, I notice that your configuration for mapred.job.tracker is
> > > > set to localhost:9001, but the JT is starting up with port 8021.
> > > > Perhaps that's a port just for RpcMetrics, but it makes me wonder if
> > > > your configuration is actually being read. What path/file are you
> > > > setting the config in?
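If that netstat check does turn up another listener on 8021, one hedged way to move the JobTracker off the conflicting port is an explicit host:port for mapred.job.tracker in mapred-site.xml. A minimal sketch only; the host is the JT host from this thread and port 9001 is a placeholder, so match both to your cluster:

```xml
<!-- Hypothetical mapred-site.xml override: bind the JobTracker to an
     explicit host:port instead of the conflicting 8021. Host and port
     here are placeholders. -->
<property>
  <name>mapred.job.tracker</name>
  <value>hadoop06.ihep.ac.cn:9001</value>
</property>
```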
> > > > 14/01/15 13:09:15 INFO ipc.Server: Starting Socket Reader #1 for port 8021
> > > > 14/01/15 13:09:15 INFO metrics.RpcMetrics: Initializing RPC Metrics
> > > > with hostName=JobTracker, port=8021
> > > >
> > > > On Tue, Jan 14, 2014 at 9:29 PM, HUO Jing <huoj...@ihep.ac.cn> wrote:
> > > >
> > > > > When I try this command "MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop jobtracker"
> > > > > there are some errors:
> > > > > 14/01/15 13:09:14 INFO mapred.JobTracker: STARTUP_MSG:
> > > > > /************************************************************
> > > > > STARTUP_MSG: Starting JobTracker
> > > > > STARTUP_MSG:   host = hadoop06.ihep.ac.cn/192.168.60.31
> > > > > STARTUP_MSG:   args = []
> > > > > STARTUP_MSG:   version = 0.20.2-cdh3u5
> > > > > STARTUP_MSG:   build = git://hadoop03.ihep.ac.cn/publicfs/cc/zangds/dmdp/hadoop-0.20.2-cdh3u5-hce-r ;
> > > > > compiled by 'zangds' on Sun Mar 24 23:36:42 CST 2013
> > > > > ************************************************************/
> > > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Updating the current master key for generating delegation tokens
> > > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Starting expired delegation token remover thread,
> > > > > tokenRemoverScanInterval=60 min(s)
> > > > > 14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Updating the current master key for generating delegation tokens
> > > > > 14/01/15 13:09:15 INFO mapred.JobTracker: Scheduler configured with
> > > > > (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> > > > > limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > > > > 14/01/15 13:09:15 INFO util.HostsFileReader: Refreshing hosts
> > > > > (include/exclude) list
> > > > > 14/01/15 13:09:15 INFO mapred.JobTracker: Starting jobtracker with
> > > > > owner as mapred
> > > > > 14/01/15 13:09:15 INFO ipc.Server: Starting Socket Reader #1 for port 8021
> > > > > 14/01/15 13:09:15 INFO metrics.RpcMetrics: Initializing RPC Metrics
> > > > > with hostName=JobTracker, port=8021
> > > > > 14/01/15 13:09:15 INFO metrics.RpcDetailedMetrics: Initializing RPC
> > > > > Metrics with hostName=JobTracker, port=8021
> > > > > 14/01/15 13:09:15 INFO mortbay.log: Logging to
> > > > > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > > > > org.mortbay.log.Slf4jLog
> > > > > 14/01/15 13:09:15 INFO http.HttpServer: Added global filtersafety
> > > > > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > > > > 14/01/15 13:09:15 INFO util.NativeCodeLoader: Loaded the
> > > > > native-hadoop library
> > > > > 14/01/15 13:09:15 ERROR security.UserGroupInformation:
> > > > > PriviledgedActionException as:mapred (auth:SIMPLE) cause:ENOENT: No
> > > > > such file or directory
> > > > > 14/01/15 13:09:15 WARN mapred.JobTracker: Error starting tracker:
> > > > > ENOENT: No such file or directory
> > > > >         at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
> > > > >         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:521)
> > > > >         at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
> > > > >         at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:195)
> > > > >         at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:491)
> > > > >         at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1852)
> > > > >         at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1849)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:396)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
> > > > >         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1849)
> > > > >         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
> > > > >         at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
> > > > >         at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
> > > > >         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)
> > > > >
> > > > > 14/01/15 13:09:16 INFO security.UserGroupInformation: JAAS
> > > > > Configuration already set up for Hadoop, not re-installing.
> > > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Updating the current master key for generating delegation tokens
> > > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Starting expired delegation token remover thread,
> > > > > tokenRemoverScanInterval=60 min(s)
> > > > > 14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager:
> > > > > Updating the current master key for generating delegation tokens
> > > > > 14/01/15 13:09:16 INFO mapred.JobTracker: Scheduler configured with
> > > > > (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> > > > > limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> > > > > 14/01/15 13:09:16 INFO util.HostsFileReader: Refreshing hosts
> > > > > (include/exclude) list
> > > > > 14/01/15 13:09:16 INFO mapred.JobTracker: Starting jobtracker with
> > > > > owner as mapred
> > > > > 14/01/15 13:09:16 FATAL mapred.JobTracker: java.net.BindException:
> > > > > Problem binding to hadoop06.ihep.ac.cn/192.168.60.31:8021 : Address
> > > > > already in use
> > > > >         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
> > > > >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:320)
> > > > >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1534)
> > > > >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:539)
> > > > >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:500)
> > > > >         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1817)
> > > > >         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
> > > > >         at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
> > > > >         at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
> > > > >         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)
> > > > > Caused by: java.net.BindException: Address already in use
> > > > >         at sun.nio.ch.Net.bind(Native Method)
> > > > >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> > > > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> > > > >         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
> > > > >         ... 9 more
> > > > >
> > > > > 14/01/15 13:09:16 INFO mapred.JobTracker: SHUTDOWN_MSG:
> > > > > /************************************************************
> > > > > SHUTDOWN_MSG: Shutting down JobTracker at
> > > > > hadoop06.ihep.ac.cn/192.168.60.31
> > > > > ************************************************************/
> > > > >
> > > > > Please tell me what's wrong.
> > > > >
> > > > > My hadoop version is CDH4.5.0, installed from yum.
> > > > > I put hadoop-mesos-0.0.5.jar in /usr/lib/hadoop-0.20-mapreduce/lib/
> > > > > and made a tar package,
> > > > > then put the package hadoop-0.20-mapreduce.tar.gz on hdfs.
> > > > > And then I changed the configuration:
> > > > > <property>
> > > > >   <name>mapred.job.tracker</name>
> > > > >   <value>localhost:9001</value>
> > > > > </property>
> > > > > <property>
> > > > >   <name>mapred.jobtracker.taskScheduler</name>
> > > > >   <value>org.apache.hadoop.mapred.MesosScheduler</value>
> > > > > </property>
> > > > > <property>
> > > > >   <name>mapred.mesos.taskScheduler</name>
> > > > >   <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
> > > > > </property>
> > > > > <property>
> > > > >   <name>mapred.mesos.master</name>
> > > > >   <value>localhost:5050</value>
> > > > > </property>
> > > > > <property>
> > > > >   <name>mapred.mesos.executor.uri</name>
> > > > >   <value>hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz</value>
> > > > > </property>
> > > > >
> > > > > But this does not work; please help me!
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: "Adam Bordelon" <a...@mesosphere.io>
> > > > > > Sent: Wednesday, January 15, 2014
> > > > > > To: dev@mesos.apache.org
> > > > > > Cc: mesos-dev <mesos-...@incubator.apache.org>
> > > > > > Subject: Re: How to run hadoop Jobtracker
> > > > > >
> > > > > > Try running
> > > > > > "MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop jobtracker"
> > > > > > The primary executable to run is the 'hadoop' executable, but it
> > > > > > needs to know where to find MESOS_NATIVE_LIBRARY, so we set that
> > > > > > environment variable on the command line first. You could set it
> > > > > > in other ways instead (in that user's .bashrc, or by creating a
> > > > > > wrapper around 'hadoop' that sets the variable before launching
> > > > > > 'hadoop').
> > > > > > You are very close to having Hadoop running on top of Mesos.
> > > > > > Good luck!
> > > > > > -Adam-
> > > > > >
> > > > > > On Tue, Jan 14, 2014 at 6:47 AM, HUO Jing <huoj...@ihep.ac.cn> wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > > I have installed Mesos and Hadoop CDH4.5.0, changed the
> > > > > > > mapred-site.xml, packaged hadoop-mesos-0.0.5.jar with hadoop,
> > > > > > > and uploaded it to hdfs. In a word, I have done everything on
> > > > > > > this page: https://github.com/mesos/hadoop
> > > > > > > But when I try to run the jobtracker with the command:
> > > > > > > bash-3.2$ /usr/local/lib/libmesos-0.14.0.so hadoop jobtracker
> > > > > > > it says: Segmentation fault
> > > > > > > Please tell me how to deal with this.
> > > > > > >
> > > > > > > Huojing
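Tying the thread together: the segfault in this first message comes from executing the shared library itself as if it were a program. As Adam's reply explains, the library path must instead be handed to the 'hadoop' binary through the MESOS_NATIVE_LIBRARY environment variable. A hedged sketch (the echo stands in for the actual 'hadoop jobtracker' launch, since that binary and the library path are specific to this cluster):

```shell
# Wrong (segfaults): a .so is not an executable program.
#   /usr/local/lib/libmesos-0.14.0.so hadoop jobtracker

# Right: export the library location, then run the 'hadoop' binary.
MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so
export MESOS_NATIVE_LIBRARY
# The echo below stands in for: hadoop jobtracker
echo "launching with MESOS_NATIVE_LIBRARY=$MESOS_NATIVE_LIBRARY"
```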