Sending this out to close the thread if anyone else experiences this
problem: nutch 1.0 is not AIX-friendly (0.9 is).
I'm not 100% sure which command it may be, but by modifying my path so
that /opt/freeware/bin has precedence, I no longer get the hadoop disk
error. While I thought this meant the problem came from the nutch script,
not the code itself, manually pointing its system calls
at /opt/freeware/bin didn't fix it. I assume until detailed debugging is
done, further releases w
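For anyone who wants to try the same workaround, here is a minimal sketch
(the crawl arguments are only an example; adjust them to your own setup):

  # give the AIX Toolbox GNU utilities precedence over the stock AIX ones
  export PATH=/opt/freeware/bin:$PATH
  bin/nutch crawl urls -dir crawl -depth 3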
Date: 04/20/2010 06:30 PM
Subject: RE: Hadoop Disk Error

1 or even 2 GB are far from impressive. Why don't you switch
hadoop.tmp.dir to a place with, say, 50 GB?
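In case it helps, a sketch of that change; nutch 1.0 picks up Hadoop
settings from conf/hadoop-site.xml, and /data/hadoop-tmp below is only a
placeholder for whatever filesystem has the room:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-tmp</value>
  </property>

Put the property inside the <configuration> element, and make sure the
directory exists and is writable by the user running nutch.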
Date: 04/20/2010 01:41 PM
To: nutch-user@lucene.apache.org
Subject: Re: Hadoop Disk Error

Yes - how much free space does it need? We ran 0.9 using /tmp, and that has
~1 GB. After I first saw this error, I moved it to another filesystem where
I have 2 GB free (maybe not "gigs and gigs", but more than I think I need).
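To see how much the filesystem behind hadoop.tmp.dir really has free, a
quick check (df -k works on AIX as well as Linux; /tmp here is just the
default location):

  df -k /tmp   # free space in 1 KB blocks for the filesystem holding /tmp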
Date: 04/19/2010 05:53 PM
Subject: RE: Hadoop Disk Error

Are you sure that you have enough space in the temporary directory used by
Hadoop?

From: Joshua J Pavel [mailto:jpa...@us.ibm.com]
Sent: Tuesday, 20 April 2010 6:42 AM
To: nutch-user@lucene.apache.org
Subject: Re: Hadoop Disk Error

Some more information, if anyone can help:
If I turn
We're just now moving from a nutch 0.9 installation to 1.0, so I'm not
entirely new to this. However, I can't even get past the first fetch now,
due to a hadoop error.
Looking in the mailing list archives, this error is normally caused by
either permissions or a full disk. I overrode the use