If the job is stuck in a Pending state, that is usually an issue with the
configuration of your Hadoop cluster, e.g. insufficient resources.
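
As a quick check (assuming a YARN cluster; the commands below are standard
YARN CLI calls, not specific to this job), you can list applications that
have been accepted but are still waiting for resources, and see what
capacity your NodeManagers report:

  yarn application -list -appStates ACCEPTED
  yarn node -list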

Those kinds of questions are best addressed to the Hadoop community or, if
you are using a non-Apache distribution, to your Hadoop distribution vendor.

Rob

On 20/02/2015 14:07, "Carmen Manzulli" <[email protected]> wrote:

>OK... now it seems to be able to start executing the job, but it stays in
>pending status for a long time. What kind of problem could this be?
>
>2015-02-20 11:29 GMT+01:00 Carmen Manzulli <[email protected]>:
>
>> Thanks a lot... I was thinking it was a Hadoop configuration problem; sorry for this question.
>>
>> 2015-02-19 19:01 GMT+01:00 Rob Vesse <[email protected]>:
>>
>>> --nodecount is not a valid argument, so it is being treated as a file name.
>>>
>>> The correct argument is --node-count.
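>>>
>>> For example, the command from your message below with only the flag corrected (a sketch of the same invocation, otherwise unchanged and not verified against your cluster):
>>>
>>> hadoop jar /home/sisinflab/NetBeansProjects/prova/target/prova-1.0-SNAPSHOT-jar-with-dependencies.jar org.apache.jena.hadoop.rdf.stats.RdfStats --node-count --data-types --namespaces --output hdfs:///prove/output --input-type triples hdfs:///input/persondata_en.ttl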
>>>
>>> Rob
>>>
>>> On 19/02/2015 16:12, "Carmen Manzulli" <[email protected]> wrote:
>>>
>>> >Hi,
>>> >I don't know why, but I'm getting this error:
>>> >
>>> >
>>> >Input path does not exist: hdfs://master:54310/user/sisinflab/--nodecount
>>> >org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://master:54310/user/sisinflab/--nodecount
>>> >    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
>>> >    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
>>> >    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
>>> >    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
>>> >    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
>>> >    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
>>> >    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
>>> >    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
>>> >    at java.security.AccessController.doPrivileged(Native Method)
>>> >    at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>> >    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
>>> >    at org.apache.jena.hadoop.rdf.stats.RdfStats.runJob(RdfStats.java:297)
>>> >    at org.apache.jena.hadoop.rdf.stats.RdfStats.computeStatistic(RdfStats.java:278)
>>> >    at org.apache.jena.hadoop.rdf.stats.RdfStats.run(RdfStats.java:264)
>>> >    at org.apache.jena.hadoop.rdf.stats.RdfStats.run(RdfStats.java:180)
>>> >    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>> >    at org.apache.jena.hadoop.rdf.stats.RdfStats.main(RdfStats.java:136)
>>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> >    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> >    at java.lang.reflect.Method.invoke(Method.java:606)
>>> >    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>> >    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>>> >
>>> >
>>> > when I try to run the command:
>>> >
>>> >hadoop jar /home/sisinflab/NetBeansProjects/prova/target/prova-1.0-SNAPSHOT-jar-with-dependencies.jar org.apache.jena.hadoop.rdf.stats.RdfStats --nodecount --data-types --namespaces --output hdfs:///prove/output --input-type triples hdfs:///input/persondata_en.ttl
>>> >
>>> >
>>> >Could anyone help me?
>>> >Thanks in advance.