Hi everyone,
Does anyone have any other solution that I can try?
I am stuck at this step and am not able to move forward.



On Tue, Nov 20, 2012 at 2:36 PM, Prashant Ladha <[email protected]> wrote:

> Hi Tejas / Lewis,
> I tried the solution mentioned in the link below, but nothing seems to be
> working.
> http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/25837
>
> Below are the results of my tryouts. Can you point me to any other link
> that has more solutions I can try out?
> The discussion essentially boils down to 3 possible solutions:
>
> 1. hadoop dfs -chmod 777 /tmp - I do not have a Hadoop executable; I have
> simply checked out the Nutch code via Eclipse.
> 2. Include the cygwin directory in PATH. - I have done this, but it still
> doesn't help.
> 3. Revert to the previous stable version of Hadoop. - I tried that, but I
> am not able to go back to the 0.20.2 version.
>  I was trying to modify ivy.xml and change the "rev" property in it.
>
>
> <dependency org="org.apache.hadoop" name="hadoop-core" rev="1.0.3"
>             conf="*->default">
>   <exclude org="hsqldb" name="hsqldb" />
>   <exclude org="net.sf.kosmosfs" name="kfs" />
>   <exclude org="net.java.dev.jets3t" name="jets3t" />
>   <exclude org="org.eclipse.jdt" name="core" />
>   <exclude org="org.mortbay.jetty" name="jsp-*" />
> </dependency>
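>
> For reference, the edit I was attempting was simply to change the "rev"
> attribute to the older release, roughly like this (this is the change I
> tried, not a configuration that is known to work):
>
> <dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2"
>             conf="*->default">
>   <!-- same <exclude> entries as above -->
> </dependency>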
>
>
>
>
> On Wed, Nov 14, 2012 at 4:21 PM, Tejas Patil <[email protected]> wrote:
>
>> Hi Prashant,
>> Read this thread:
>> http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/25837
>>
>> Thanks,
>> Tejas
>>
>>
>> On Wed, Nov 14, 2012 at 10:14 AM, Lewis John Mcgibbney <
>> [email protected]> wrote:
>>
>> > Hi Prashant,
>> >
>> > Please take a look at either the Nutch or the Hadoop user@ lists. I've
>> > seen and reported on this previously, so it should not be too hard to
>> > find.
>> >
>> > hth
>> >
>> > Lewis
>> >
>> > On Wed, Nov 14, 2012 at 6:07 PM, Prashant Ladha
>> > <[email protected]> wrote:
>> > > Hi,
>> > > I am trying to set up Nutch via Eclipse.
>> > > I followed the instructions in the link below.
>> > > http://wiki.apache.org/nutch/RunNutchInEclipse
>> > >
>> > > But when I run the Crawl class, it throws the exception below.
>> > > I tried the suggestion to add cygwin to the PATH, mentioned in the
>> > > link below, but that did not help.
>> > > http://florianhartl.com/nutch-installation.html
>> > >
>> > > I am using a Windows 7 laptop.
>> > >
>> > > Can you please help me resolve this issue?
>> > >
>> > >
>> > > solrUrl is not set, indexing will be skipped...
>> > > crawl started in: crawl
>> > > rootUrlDir = urls
>> > > threads = 10
>> > > depth = 3
>> > > solrUrl=null
>> > > topN = 50
>> > > Injector: starting at 2012-11-14 23:34:18
>> > > Injector: crawlDb: crawl/crawldb
>> > > Injector: urlDir: urls
>> > > Injector: Converting injected urls to crawl db entries.
>> > > Exception in thread "main" java.io.IOException: Failed to set
>> > > permissions of path:
>> > > \tmp\hadoop-XXXXXX\mapred\staging\XXXXXXXXX916119234\.staging to 0700
>> > > at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
>> > > at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
>> > > at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
>> > > at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
>> > > at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>> > > at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
>> > > at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
>> > > at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>> > > at java.security.AccessController.doPrivileged(Native Method)
>> > > at javax.security.auth.Subject.doAs(Subject.java:415)
>> > > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>> > > at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>> > > at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
>> > > at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
>> > > at org.apache.nutch.crawl.Injector.inject(Injector.java:278)
>> > > at org.apache.nutch.crawl.Crawl.run(Crawl.java:127)
>> > > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> > > at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
>> >
>> >
>> >
>> > --
>> > Lewis
>> >
>>
>
>
