Hi Sujan,

How are you launching Nutch?
In local or (pseudo)-distributed mode?

HADOOP_HOME is only used when running in (pseudo)-distributed mode,
see https://wiki.apache.org/nutch/NutchHadoopSingleNodeTutorial
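
A quick way to see why: bin/nutch picks the mode roughly like this
(a paraphrased sketch for illustration, not the verbatim script; the
variable names are simplified):

  # deploy mode: a nutch-*.job file exists, so hand off to the Hadoop
  # launcher -- this is the only place HADOOP_HOME comes into play
  if [ -f "$NUTCH_HOME"/*nutch*.job ]; then
    exec "$HADOOP_HOME/bin/hadoop" jar "$NUTCH_JOB" "$CLASS" "$@"
  else
    # local mode: plain java, classpath built from runtime/local/lib/,
    # HADOOP_HOME is never consulted
    exec "$JAVA" -cp "$CLASSPATH" "$CLASS" "$@"
  fi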

For local mode, you have to replace all Hadoop jars and their dependencies.
This could be done by copying them into runtime/local/lib/,
but it's better (and much easier to maintain when recompiling) to change
the Hadoop dependencies in ivy/ivy.xml and rebuild.
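
For illustration, the Hadoop entries in ivy/ivy.xml look roughly like the
following (the revisions are examples, use the version you built
winutils.exe for):

  <dependency org="org.apache.hadoop" name="hadoop-common"
              rev="2.4.0" conf="*->default" />
  <dependency org="org.apache.hadoop" name="hadoop-mapreduce-client-core"
              rev="2.4.0" conf="*->default" />

After changing the revisions, run "ant runtime" in the Nutch source
directory so that runtime/local/lib/ is regenerated with the new jars.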

Best,
Sebastian

On 08/08/2016 10:49 AM, Sujan Suppala wrote:
> Hi,
> 
> I am seeing the below exception when I run the inject command. I have
> installed Nutch 1.12 by following the wiki
> http://wiki.apache.org/nutch/NutchTutorial on Windows 7, and JAVA_HOME is
> set to a 64-bit JDK 1.7. I am using Cygwin64 to run the inject command.
> 
> $ bin/nutch inject TestCrawl/crawldb urls
> Injector: starting at 2016-08-08 13:51:43
> Injector: crawlDb: TestCrawl/crawldb
> Injector: urlDir: urls
> Injector: Converting injected urls to crawl db entries.
> Injector: java.lang.NullPointerException
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
>         at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
>         at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>         at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>         at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>         at org.apache.nutch.util.LockUtil.createLockFile(LockUtil.java:58)
>         at org.apache.nutch.crawl.Injector.inject(Injector.java:357)
>         at org.apache.nutch.crawl.Injector.run(Injector.java:467)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.nutch.crawl.Injector.main(Injector.java:441)
> 
> and the Hadoop.log has the below exception:
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
>                 at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
>                 at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
>                 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
>                 at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:432)
>                 at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:478)
>                 at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
>                 at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
>                 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
>                 at org.apache.nutch.crawl.Injector.main(Injector.java:441)
> 
> So I downloaded hadoop-2.4.0-src and followed BUILDING.txt to build
> Hadoop so that winutils.exe and the other DLLs are created (built with
> the environment variable Platform=x64).
> I have set the following environment variable to point to winutils.exe:
> $ export HADOOP_HOME='C:\Dev\hadoop-2.4.0-src\hadoop-dist\target\hadoop-2.4.0'
> 
> After this, I ran the inject command and I am seeing the below
> exception in the console:
> 
> $ bin/nutch inject TestCrawl/crawldb urls
> Injector: starting at 2016-08-08 13:53:25
> Injector: crawlDb: TestCrawl/crawldb
> Injector: urlDir: urls
> Injector: Converting injected urls to crawl db entries.
> Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
>         at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
>         at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
>         at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
>         at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:173)
>         at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:160)
>         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:94)
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
>         at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
>         at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
>         at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
>         at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
>         at org.apache.nutch.crawl.Injector.inject(Injector.java:376)
>         at org.apache.nutch.crawl.Injector.run(Injector.java:467)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.nutch.crawl.Injector.main(Injector.java:441)
> 
> Hadoop.log has the below entries:
> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: starting at 2016-08-08 13:53:25
> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: crawlDb: TestCrawl/crawldb
> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: urlDir: urls
> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: Converting injected urls to crawl db entries.
> 2016-08-08 13:53:25,708 WARN  util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-08-08 13:53:27,317 WARN  conf.Configuration - file:/tmp/hadoop-ssuppala/mapred/staging/ssuppala1678868012/.staging/job_local1678868012_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-08-08 13:53:27,324 WARN  conf.Configuration - file:/tmp/hadoop-ssuppala/mapred/staging/ssuppala1678868012/.staging/job_local1678868012_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 
> 
> How do I resolve this issue?
> 
> Thanks
> Sujan
> 
