Ravi,

The "~" in ~/Desktop/input/doc isn't resolvable by the code, AFAIK. A
shell normally expands it, but you appear to be running the job from
Eclipse, where no shell is involved, so the "~" is passed through
literally; that is why it ends up appended to your project directory in
the error below. Provide an absolute path as the input argument instead.
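
If you do want to accept "~" anyway, you could expand it yourself before
building the Path. A rough sketch (the expandHome helper below is just
illustrative, not a Hadoop API):

    import org.apache.hadoop.fs.Path;

    // Hadoop treats "~" as a literal directory name, so expand a
    // leading "~" to the current user's home directory ourselves.
    public static Path expandHome(String p) {
        if (p.equals("~") || p.startsWith("~/")) {
            p = System.getProperty("user.home") + p.substring(1);
        }
        return new Path(p);
    }

    // Usage: FileInputFormat.setInputPaths(conf, expandHome(args[0]));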

On Thu, May 17, 2012 at 8:37 PM, Ravi Joshi <ravi.josh...@yahoo.com> wrote:
> Now I have added all the jar files that came with the hadoop-1.0.1.tar.gz
> package, but some new errors are showing up. This time I am following
> WordCount v2
> (http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Example%3a+WordCount+v2.0).
> The error follows:
>
> 12/05/17 20:31:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 12/05/17 20:31:43 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 12/05/17 20:31:43 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-hduser/mapred/staging/hduser-481041798/.staging/job_local_0001
> 12/05/17 20:31:43 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/hduser/Desktop/Eclipse_Workspace/K-Means Algorithm/~/Desktop/input/doc
> Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/hduser/Desktop/Eclipse_Workspace/K-Means Algorithm/~/Desktop/input/doc
>     at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
>     at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
>     at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
>     at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
>     at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:416)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
>     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
>     at test.WordCount.run(WordCount.java:131)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at test.WordCount.main(WordCount.java:136)
>
>
> -Ravi Joshi
>
> --- On Thu, 17/5/12, Ravi Joshi <ravi.josh...@yahoo.com> wrote:
>
> From: Ravi Joshi <ravi.josh...@yahoo.com>
> Subject: Re: Unable to work with Hadoop 1.0.1 using eclipse-indigo
> To: common-user@hadoop.apache.org
> Received: Thursday, 17 May, 2012, 2:16 PM
>
> Hi, I added hadoop-core-1.0.1.jar to the project classpath. I am testing
> WordCount
> (http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Example%3A+WordCount+v1.0),
> but when I try to run my WordCount.java in Eclipse, it shows the
> following errors:
>
>
> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
>     at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:143)
>     at test.WordCount.main(WordCount.java:56)
> Caused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
>     ... 2 more
>
> -Ravi Joshi
>
> --- On Thu, 17/5/12, Ravi Joshi <ravi.josh...@yahoo.com> wrote:
>
> From: Ravi Joshi <ravi.josh...@yahoo.com>
> Subject: Re: Unable to work with Hadoop 1.0.1 using eclipse-indigo
> To: common-user@hadoop.apache.org
> Received: Thursday, 17 May, 2012, 1:37 PM
>
> Hi Jagat, thank you so much for answering the question. Could you please
> tell me the names and locations of all the jar files that must be added
> to the project? I am using hadoop-1.0.1 with Eclipse Indigo on Ubuntu
> 10.04 LTS.
> Thank you.
>
> -Ravi Joshi
>
> --- On Thu, 17/5/12, Jagat <jagatsi...@gmail.com> wrote:
>
> From: Jagat <jagatsi...@gmail.com>
> Subject: Re: Unable to work with Hadoop 1.0.1 using eclipse-indigo
> To: common-user@hadoop.apache.org
> Received: Thursday, 17 May, 2012, 1:32 PM
>
> Hello Ravi
>
> To create MapReduce programs, the Eclipse plugin is not mandatory. Just:
>
> 1. Download Hadoop.
> 2. Create a Java project in Eclipse.
> 3. Add the jar files from the Hadoop home folder and its lib/ directory
>    (from the share folder in Hadoop 2.x) to the project classpath.
> 4. Create your Mapper, Reducer, and Driver classes; a minimal sketch follows.
> 5. Run it.
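>
> A minimal, self-contained example in the spirit of the WordCount v1 code
> from the tutorial (old mapred API; treat this as a sketch, not the
> canonical version):
>
> import java.io.IOException;
> import java.util.Iterator;
> import java.util.StringTokenizer;
>
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.IntWritable;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapred.*;
>
> public class WordCount {
>
>   // Mapper: emit (word, 1) for every token in the input line.
>   public static class Map extends MapReduceBase
>       implements Mapper<LongWritable, Text, Text, IntWritable> {
>     private final static IntWritable one = new IntWritable(1);
>     private final Text word = new Text();
>
>     public void map(LongWritable key, Text value,
>         OutputCollector<Text, IntWritable> output, Reporter reporter)
>         throws IOException {
>       StringTokenizer itr = new StringTokenizer(value.toString());
>       while (itr.hasMoreTokens()) {
>         word.set(itr.nextToken());
>         output.collect(word, one);
>       }
>     }
>   }
>
>   // Reducer: sum the counts collected for each word.
>   public static class Reduce extends MapReduceBase
>       implements Reducer<Text, IntWritable, Text, IntWritable> {
>     public void reduce(Text key, Iterator<IntWritable> values,
>         OutputCollector<Text, IntWritable> output, Reporter reporter)
>         throws IOException {
>       int sum = 0;
>       while (values.hasNext()) {
>         sum += values.next().get();
>       }
>       output.collect(key, new IntWritable(sum));
>     }
>   }
>
>   // Driver: passing WordCount.class to JobConf also sets the job jar.
>   public static void main(String[] args) throws Exception {
>     JobConf conf = new JobConf(WordCount.class);
>     conf.setJobName("wordcount");
>     conf.setOutputKeyClass(Text.class);
>     conf.setOutputValueClass(IntWritable.class);
>     conf.setMapperClass(Map.class);
>     conf.setReducerClass(Reduce.class);
>     FileInputFormat.setInputPaths(conf, new Path(args[0]));
>     FileOutputFormat.setOutputPath(conf, new Path(args[1]));
>     JobClient.runJob(conf);
>   }
> }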
>
> On Thu, May 17, 2012 at 6:48 PM, Ravi Joshi <ravi.josh...@yahoo.com> wrote:
>
>> Hi, I recently downloaded and successfully installed hadoop-1.0.1 on my
>> Ubuntu 10.04 LTS machine, and now I want to design a map-reduce
>> application. As suggested by some blogs, one should first install the
>> Eclipse plugin for Hadoop, which is located inside contrib->eclipse-plugin;
>> but in my hadoop-1.0.1.tar.gz, no Eclipse plugin is found inside the
>> contrib directory. Only the datajoin, failmon, gridmix, hdfsproxy, hod,
>> index, streaming, and vaidya directories are present. When I looked under
>> src->contrib, I found an eclipse-plugin directory, but no jar file.
>> I haven't worked with Hadoop under Eclipse before; can somebody please
>> explain the plugin installation to me, so that I can start map-reduce
>> development?
>> Thank you.
>>
>> -Ravi Joshi



-- 
Harsh J
