Disk size should be OK:

daniel@daniel-linux:~/Bureau/apache-nutch-1.4-bin/runtime/local/logs$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda5               73G   65G  3.8G  95% /

Access rights: as I'm testing on my laptop, all files in apache-nutch-1.4-bin/ are mode 777.

Hmm... Time for tea while I try to understand ;)
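A sketch of what I could check next, since error=12 in the log below is errno 12 (ENOMEM): the JVM failed to fork a child process to run "chmod", which points at memory rather than disk space (the exact thresholds here are just my assumption):

```shell
# error=12 is ENOMEM: fork() of the "chmod" child failed for lack of
# memory, so disk space is not the culprit. Check what is actually free:
grep -E 'MemFree|SwapFree' /proc/meminfo

# The kernel's overcommit policy also matters for fork() from a large JVM:
# 0 = heuristic, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```

If memory is tight, lowering the JVM heap (e.g. a smaller -Xmx in the Nutch/Hadoop options) might leave enough room for the fork to succeed.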


On 23/02/2012 11:47, remi tassing wrote:
Disk size issue?
Access rights?

On Thu, Feb 23, 2012 at 12:39 PM, Daniel Bourrion <[email protected]> wrote:

Hi Markus,
Thanks for the help.

(Hope I'm not boring everybody.)

I've erased everything in crawl/.

Launching my Nutch, I now get:

-----
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.

Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
    at org.apache.nutch.crawl.CrawlDb.update(CrawlDb.java:105)
    at org.apache.nutch.crawl.CrawlDb.update(CrawlDb.java:63)
    at org.apache.nutch.crawl.Crawl.run(Crawl.java:140)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)

-----


In the logs, I got:

____


2012-02-23 11:25:48,803 INFO  crawl.CrawlDb - CrawlDb update: 404 purging: false
2012-02-23 11:25:48,804 INFO  crawl.CrawlDb - CrawlDb update: Merging segment data into db.
2012-02-23 11:25:49,353 INFO  regex.RegexURLNormalizer - can't find rules for scope 'crawldb', using default
2012-02-23 11:25:49,560 INFO  regex.RegexURLNormalizer - can't find rules for scope 'crawldb', using default
2012-02-23 11:25:49,985 WARN  mapred.LocalJobRunner - job_local_0007
java.io.IOException: Cannot run program "chmod": java.io.IOException: error=12, Cannot allocate memory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:481)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:473)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:280)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:372)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:364)
    at org.apache.hadoop.mapred.MapTask.localizeConfiguration(MapTask.java:111)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:173)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
    at java.lang.ProcessImpl.start(ProcessImpl.java:81)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
    ... 15 more
_____



--
Best regards.
__

Daniel Bourrion, library curator
Head of the digital library
Direct line: 02.44.68.80.50
SCD Université d'Angers - http://bu.univ-angers.fr
Bu Saint Serge - 57 Quai Félix Faure - 49100 Angers cedex

***********************************
" And by the power of a word
I begin my life again "
                       Paul Eluard
***********************************
personal blog: http://archives.face-ecran.fr/
