Hi Sujan,

Did you also place hadoop.dll in a place where it is found?
Sorry, I have no experience with what the correct place would be
(I'm on Linux only and don't even have Windows available for testing).
Please check, e.g., [1].
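
In case it helps, here is a rough sketch of the first thing I would try
(untested by me, the paths are just taken from your mail): make sure the
directory holding hadoop.dll and winutils.exe is on the PATH before
starting Nutch, since the JVM on Windows searches the PATH for native
libraries. In Cygwin that could look like:

$ export HADOOP_HOME='C:\Dev\hadoop-2.4.0-src\hadoop-dist\target\hadoop-2.4.0'
$ # cygpath converts the Windows path into a Cygwin one
$ export PATH="$PATH:$(cygpath -u "$HADOOP_HOME")/bin"
$ bin/nutch inject TestCrawl/crawldb urls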

Possibly runtime/local/lib/native/name_of_your_platform/ (sketched
below); see lib/native/README.txt and also how the platform name
is determined in bin/nutch.
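
As a sketch (the platform string below is only my guess - check the
value bin/nutch actually computes on your machine, e.g. by echoing
$JAVA_PLATFORM there):

$ # copy the native library into the platform-specific directory
$ mkdir -p runtime/local/lib/native/Windows_NT-amd64-64
$ cp "$(cygpath -u "$HADOOP_HOME")/bin/hadoop.dll" \
     runtime/local/lib/native/Windows_NT-amd64-64/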

I hope you'll find a solution. Would you mind adding your findings to [2]?

We would really appreciate it if someone could update this page - it's
horribly outdated!  If you're willing to, just ask for write access to
the Nutch wiki on this list.

Thanks,
Sebastian

[1] http://stackoverflow.com/questions/30964216/hadoop-on-windows-yarn-fails-to-start-with-java-lang-unsatisfiedlinkerror

[2] https://wiki.apache.org/nutch/GettingNutchRunningWithWindows


On 08/08/2016 12:01 PM, Sujan Suppala wrote:
> Hi Sebastian,
> 
> I am launching Nutch in local mode. I compiled the Nutch 1.12 source and 
> am executing the inject command from the local directory in Cygwin 
> (/cygdrive/c/dev/apache-nutch-1.12/runtime/local).
> 
> I tried copying all the Hadoop jars from 
> hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/**/*.jar to 
> runtime/local/lib but am seeing the same exceptions with/without HADOOP_HOME.
> 
> Please suggest.
> 
> Thanks
> Sujan 
> 
> -----Original Message-----
> From: Sebastian Nagel [mailto:[email protected]] 
> Sent: Monday, August 08, 2016 2:28 PM
> To: [email protected]
> Subject: Re: nutch 1.12 + windows : UnsatisfiedLinkError exception while 
> running inject command
> 
> Hi Sujan,
> 
> how are you launching Nutch?
> In local or (pseudo-)distributed mode?
> 
> HADOOP_HOME is only used when running in (pseudo-)distributed mode, see 
> https://wiki.apache.org/nutch/NutchHadoopSingleNodeTutorial
> 
> For local mode, you have to replace all Hadoop jars and their dependencies.
> This could be done by copying them over into runtime/local/lib/, but it's 
> better (and much easier over time when recompiling) to change the dependency 
> in ivy/ivy.xml, e.g. as sketched below.
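> 
> Just a sketch (entry names from memory - please check your ivy/ivy.xml):
> 
> $ grep -n 'org.apache.hadoop' ivy/ivy.xml  # find the Hadoop dependencies
> $ # change their rev="..." attributes to the Hadoop version you built,
> $ # then rebuild so that runtime/local/lib is regenerated:
> $ ant runtime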
> 
> Best,
> Sebastian
> 
> On 08/08/2016 10:49 AM, Sujan Suppala wrote:
>> Hi,
>>
>> I am seeing the below exception when I run the inject command. I have 
>> installed Nutch 1.12 by following the wiki 
>> http://wiki.apache.org/nutch/NutchTutorial on Windows 7, and JAVA_HOME is 
>> set to a 64-bit JDK 1.7. I am using Cygwin64 to run the inject command.
>>
>> $ bin/nutch inject TestCrawl/crawldb urls
>> Injector: starting at 2016-08-08 13:51:43
>> Injector: crawlDb: TestCrawl/crawldb
>> Injector: urlDir: urls
>> Injector: Converting injected urls to crawl db entries.
>> Injector: java.lang.NullPointerException
>>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
>>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>         at 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
>>         at 
>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
>>         at 
>> org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
>>         at 
>> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>>         at 
>> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>>         at 
>> org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>>         at org.apache.nutch.util.LockUtil.createLockFile(LockUtil.java:58)
>>         at org.apache.nutch.crawl.Injector.inject(Injector.java:357)
>>         at org.apache.nutch.crawl.Injector.run(Injector.java:467)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>         at org.apache.nutch.crawl.Injector.main(Injector.java:441)
>>
>> and hadoop.log has the below exception:
>> java.io.IOException: Could not locate executable null\bin\winutils.exe in 
>> the Hadoop binaries.
>>                 at 
>> org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
>>                 at 
>> org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
>>                 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
>>                 at 
>> org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:432)
>>                 at 
>> org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:478)
>>                 at 
>> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
>>                 at 
>> org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
>>                 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
>>                 at 
>> org.apache.nutch.crawl.Injector.main(Injector.java:441)
>>
>> So, I downloaded hadoop-2.4.0-src and followed BUILDING.txt to build 
>> Hadoop so that winutils.exe and the other DLLs are created (built by 
>> setting the environment variable Platform=x64). I have set the following 
>> environment variable to point to winutils.exe:
>>
>> $ export HADOOP_HOME='C:\Dev\hadoop-2.4.0-src\hadoop-dist\target\hadoop-2.4.0'
>>
>> After this, I ran the inject command and I am seeing the other exception 
>> below in the console:
>>
>> $ bin/nutch inject TestCrawl/crawldb urls
>> Injector: starting at 2016-08-08 13:53:25
>> Injector: crawlDb: TestCrawl/crawldb
>> Injector: urlDir: urls
>> Injector: Converting injected urls to crawl db entries.
>> Exception in thread "main" java.lang.UnsatisfiedLinkError: 
>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
>>         at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native 
>> Method)
>>         at 
>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
>>         at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
>>         at 
>> org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:173)
>>         at 
>> org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:160)
>>         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:94)
>>         at 
>> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
>>         at 
>> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
>>         at 
>> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>>         at 
>> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>>         at 
>> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
>>         at 
>> org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
>>         at 
>> org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
>>         at 
>> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
>>         at 
>> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
>>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
>>         at org.apache.nutch.crawl.Injector.inject(Injector.java:376)
>>         at org.apache.nutch.crawl.Injector.run(Injector.java:467)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>         at org.apache.nutch.crawl.Injector.main(Injector.java:441)
>>
>> Hadoop.log has the below entries:
>> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: starting at 
>> 2016-08-08 13:53:25
>> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: crawlDb: 
>> TestCrawl/crawldb
>> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: urlDir: urls
>> 2016-08-08 13:53:25,568 INFO  crawl.Injector - Injector: Converting injected 
>> urls to crawl db entries.
>> 2016-08-08 13:53:25,708 WARN  util.NativeCodeLoader - Unable to load 
>> native-hadoop library for your platform... using builtin-java classes 
>> where applicable
>> 2016-08-08 13:53:27,317 WARN  conf.Configuration - 
>> file:/tmp/hadoop-ssuppala/mapred/staging/ssuppala1678868012/.staging/job_local1678868012_0001/job.xml:an
>>  attempt to override final parameter: 
>> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> 2016-08-08 13:53:27,324 WARN  conf.Configuration - 
>> file:/tmp/hadoop-ssuppala/mapred/staging/ssuppala1678868012/.staging/job_local1678868012_0001/job.xml:an
>>  attempt to override final parameter: 
>> mapreduce.job.end-notification.max.attempts;  Ignoring.
>>
>>
>> How can I resolve this issue?
>>
>> Thanks
>> Sujan
>>
> 
