My settings:

mapred.local.dir = /hadoop/mapred/local
  The local directory where MapReduce stores intermediate data files.
  May be a comma-separated list of directories on different devices
  in order to spread disk I/O.

mapred.system.dir = /hadoop/mapred/system
  The shared direc
[ http://issues.apache.org/jira/browse/NUTCH-336?page=all ]
Chris Schneider updated NUTCH-336:
--
Attachment: NUTCH-336.patch.txt
Here's a patch that fixes the problem. It separates a new injectionScore API
out from the initialScore API.
> Harvested lin
Most probably you have run out of space in the tmp (local) filesystem.
Use properties like
mapred.system.dir
mapred.local.dir
in hadoop-site.xml to get over this problem.
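For illustration, hadoop-site.xml entries pointing these directories at a larger partition might look like the sketch below; the /hadoop/... paths are assumptions taken from the settings quoted elsewhere in this thread, not values the advice above prescribes:

```xml
<!-- hadoop-site.xml: move MapReduce scratch space off the small tmp partition -->
<property>
  <name>mapred.local.dir</name>
  <!-- may be a comma-separated list of directories on different disks -->
  <value>/hadoop/mapred/local</value>
</property>

<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
</property>
```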
[EMAIL PROTECTED] wrote:
I forget ;-) One more question:
Is this a problem with Nutch or with Hadoop?
-Original Mess
Hi.
I have a possible project where I'm looking at extracting information from
various public/college websites. I don't need to index the text/content of
the sites; I do need to extract specific information.
As an example, a site might have a course schedule page, which in turn has
links to the d
[ http://issues.apache.org/jira/browse/NUTCH-266?page=all ]
Renaud Richardet updated NUTCH-266:
---
Attachment: patch.diff
Thank you Sami,
We had a similar problem with Win XP and were able to fix it by using
hadoop-nightly.jar. However, because of some
I forget ;-) One more question:
Is this a problem with Nutch or with Hadoop?
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 02, 2006 11:38 AM
To: nutch-dev@lucene.apache.org
Subject: nutch
Importance: High
I use Nutch 0.8 (mapred). Nutch is started on 3 servers.
When Nutch tries to index a segment, I get this error on a tasktracker:
060727 215111 task_0025_r_00_1 SEVERE FSError from child
060727 215111 task_0025_r_00_1 org.apache.hadoop.fs.FSError:
java.io.IOException: No space left on device
060727 215111
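The "No space left on device" FSError above typically points at the partition holding the MapReduce local directory. A quick way to check free space on each tasktracker is a one-liner like this; /tmp stands in for whatever mapred.local.dir actually points at in your setup:

```shell
# Print available kilobytes on the partition holding the MapReduce
# local directory (here assumed to be under /tmp; substitute your
# mapred.local.dir path). -P forces POSIX single-line output.
df -Pk /tmp | awk 'NR==2 {print $4}'
```

If the number printed is near zero, either clean the partition or repoint mapred.local.dir (and hadoop.tmp.dir) at a larger disk.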