Hi Hadoopers,
we are experiencing a lot of "Could not obtain block" / "Could not get
block locations" IOExceptions when processing a 400 GB Map/Reduce job
on our 6-node DFS/MapReduce (v0.16.4) cluster. Each node is
equipped with a 400 GB SATA HDD and runs SUSE Linux Enterprise
Edition.
are you running on
the namenode? How many cores do your machines have?
thanks,
dhruba
On Fri, May 16, 2008 at 6:02 AM, André Martin [EMAIL PROTECTED] wrote:
Hi Hadoopers,
we are experiencing a lot of "Could not obtain block" / "Could not get block
locations" IOExceptions when processing a 400
Hi Cagdas,
simply adjust your log4j.properties (it needs to be on the CLASSPATH of your
DFSClient app):
log4j.logger.org.apache.hadoop=DEBUG
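For context, that logger line would normally sit inside a log4j.properties alongside an appender definition; a minimal sketch (the appender name `console` and the pattern layout are just illustrative choices, not from the thread):

```properties
# Root logger stays at INFO so only Hadoop classes get verbose
log4j.rootLogger=INFO, console

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n

# Turn on DEBUG output for all org.apache.hadoop classes,
# including the DFSClient
log4j.logger.org.apache.hadoop=DEBUG
```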
Cu on the 'net,
Bye - bye,
André
Cagdas Gerede wrote:
How do you set DFSClient's log to
Hi Cagdas & Michael,
our cluster works fine with no crashes so far even with more than one
million files - we have 11 datanodes and one namenode...
Cu on the 'net,
Bye - bye,
André
Michael Bieniosek wrote:
From my experience,
heartbeat. That's the reason a small number (like 100) was chosen.
If you have 8 datanodes, your system will probably delete about 800 blocks
every 3 seconds.
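The arithmetic behind that estimate can be sketched as follows; the datanode count (8), per-heartbeat cap (100), and 3-second heartbeat interval are the values mentioned above, while the class and method names are just illustrative:

```java
// Back-of-the-envelope estimate of block deletion throughput:
// the namenode asks each datanode to delete at most
// `blocksPerHeartbeat` blocks per heartbeat, and heartbeats
// arrive once every heartbeat interval.
public class DeletionRate {

    // Blocks the whole cluster can delete per heartbeat interval.
    static long blocksPerInterval(int datanodes, int blocksPerHeartbeat) {
        return (long) datanodes * blocksPerHeartbeat;
    }

    public static void main(String[] args) {
        int datanodes = 8;            // cluster size from the thread
        int blocksPerHeartbeat = 100; // small cap keeps heartbeats cheap
        int intervalSeconds = 3;      // heartbeat interval

        long perInterval = blocksPerInterval(datanodes, blocksPerHeartbeat);
        System.out.println(perInterval + " blocks every "
                + intervalSeconds + " seconds");
    }
}
```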
Thanks,
dhruba
-----Original Message-----
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 3:06 PM
To: core-user
Hi everyone,
I ran a distributed system that consists of 50 spiders/crawlers and 8
server nodes with a Hadoop DFS cluster with 8 datanodes and a namenode...
Each spider has 5 job processing / data crawling threads and puts
crawled data as one complete file onto the DFS - additionally there are
Hi everyone,
the namenode doesn't re-start properly:
2008-03-02 01:25:25,120 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = se09/141.76.xxx.xxx
STARTUP_MSG: args = []
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
André Martin wrote:
Hi everyone,
I'm seeing the above exception on my DFS clients:
org.apache.hadoop.ipc.RemoteException
Hi everyone,
I applied the patch provided at
https://issues.apache.org/jira/browse/HADOOP-2873 and my
namenode/cluster is up again without formatting it :-) Thanks for the
quick help!
Reading and writing to the cluster seem to work fine except for MapRed
jobs: all counters say 0 even though I can
Hi everyone,
I observed some inconsistent behavior:
When I use fs.rename with a local filesystem and the target directory does
not exist, it is created automatically.
But when I run the same code with DFS as the underlying FS, the file(s)
won't be moved, and there is no IOException etc. - basically
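A defensive pattern that sidesteps this kind of silent failure is to create the target's parent directory yourself and check the boolean result of the rename. The sketch below uses plain java.io.File, whose renameTo likewise returns false without throwing when the target directory is missing; with Hadoop's FileSystem API the analogous guard would be fs.mkdirs on the destination's parent followed by checking the return value of fs.rename, though that variant is untested here:

```java
import java.io.File;
import java.io.IOException;

public class SafeRename {

    // Move src to dst, creating dst's parent directory first and
    // turning a silent boolean failure into an IOException.
    static void rename(File src, File dst) throws IOException {
        File parent = dst.getParentFile();
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new IOException("could not create directory " + parent);
        }
        // renameTo returns false instead of throwing on failure
        if (!src.renameTo(dst)) {
            throw new IOException("rename " + src + " -> " + dst + " failed");
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("safe-rename", ".tmp");
        // target sits in a directory that does not exist yet
        File dst = new File(src.getParentFile(), "safe-rename-dir/moved.tmp");
        rename(src, dst);
        System.out.println("moved: " + dst.exists());
    }
}
```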
(let me know if you need help with that). Did
subsequent tries to restart succeed?
Thanks,
Raghu.
André Martin wrote:
Hi everyone,
I downloaded the nightly build (see below) yesterday, and after the
cluster had worked fine for about 10 hours I got the following
error message from the DFS client even