Yes, you can install Cygwin, but I recommend you just install a
virtual machine running Linux.
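If you do go the Cygwin route, the usual sticking point is that Hadoop shells out to Unix utilities (the stack trace below dies in Shell.runCommand while setting file permissions, i.e. running chmod). A minimal sanity check, assuming Cygwin was installed to its default location C:\cygwin, run from a Cygwin bash prompt:

```shell
# Hadoop's Shell.runCommand executes external Unix commands such as chmod,
# so Cygwin's bin directory must be on the PATH that Hadoop inherits.
# (C:\cygwin is the default install path; adjust if yours differs.)
export PATH="/cygdrive/c/cygwin/bin:$PATH"

# If this prints a version string, the chmod binary that Hadoop's
# setPermission call needs is resolvable from the shell:
chmod --version
```

You would also need to set the equivalent PATH in the Windows environment (or in the script that launches Nutch), not just in the interactive shell, so the JVM that runs the crawl sees it.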


2011/1/24 Markus Jelsma <[email protected]>:
> I believe this is because you're running Hadoop on Windows. I don't know a
> thing about Windows but Hadoop uses some shell commands to operate. I suggest
> you try to find some resources on Hadoop running on Windows.
>
>> Hi, I am running Windows 7, JDK 1.6 and Nutch 1.2. When running a crawl
>> I get the following exception from the log:
>>
>> CrawlDb update: starting at 2011-01-20 15:59:58
>> CrawlDb update: db: crawl/crawldb
>> CrawlDb update: segments: [crawl/segments/20110120155841]
>> CrawlDb update: additions allowed: true
>> CrawlDb update: URL normalizing: true
>> CrawlDb update: URL filtering: true
>> CrawlDb update: Merging segment data into db.
>>
>> Exception in thread "main" org.apache.hadoop.util.Shell$ExitCodeException:
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:195)
>>         at org.apache.hadoop.util.Shell.run(Shell.java:134)
>>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
>>         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:481)
>>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:473)
>>         at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:280)
>>         at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:372)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:364)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:243)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:787)
>>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
>>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
>>         at org.apache.nutch.crawl.CrawlDb.update(CrawlDb.java:98)
>>         at org.apache.nutch.crawl.CrawlDb.update(CrawlDb.java:61)
>>         at org.apache.nutch.crawl.Crawl.main(Crawl.java:137)
>>
>> Has anyone come across this error?
>>
>> Thanks
>> Michael
>
