Hi,
The following segmentation fault still exists.
I rewrote my application to use Ant, but when I integrate it with libhdfs
it fails with a segmentation fault and exits with code 139.
Please help, as I have already spent a lot of time rewriting my
application to use Hadoop, and it still fails.
Hello,
Yes, you can do this by specifying in hadoop-site.xml the location of the
namenode where your data is already distributed:
<property>
  <name>fs.default.name</name>
  <value>IPAddress:PortNo</value>
</property>
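A minimal sketch of the client side, assuming the 0.16-era API (the address
below is hypothetical; with hadoop-site.xml on the classpath the conf.set
call is unnecessary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ConnectToNameNode {
    public static void main(String[] args) throws Exception {
        // Picks up hadoop-site.xml from the classpath.
        Configuration conf = new Configuration();
        // Or set it explicitly (hypothetical address):
        // conf.set("fs.default.name", "namenode.example.com:9000");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
    }
}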
Hello,
I'm working with Hadoop 0.16.1 and have an issue with the DFS. Sometimes
writing to HDFS gets blocked; other times it doesn't happen, so it's not
easily reproducible.
My cluster has 4 nodes plus one master running the NameNode and JobTracker.
These are the logs that appear when it happens:
Exception in receiveBlock for block java.io.IOException: Trying to
change block file offset of block blk_7857709233639057851 to 33357824
but actual size of file is 33353728
This was fixed in HADOOP-3033. You can try running the latest 0.16 branch
(svn...hadoop/core/branches/branch-016); the fix will also be in 0.16.2.
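For anyone trying to reproduce this, a minimal sketch of the kind of write
that exercises the datanode's receiveBlock path (the path and sizes below
are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StreamWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Data is streamed to a pipeline of datanodes; the offset
        // mismatch in the log above is raised on that path.
        FSDataOutputStream out = fs.create(new Path("/tmp/stress.dat"));
        byte[] buf = new byte[64 * 1024];
        for (int i = 0; i < 2048; i++) { // ~128MB, spans multiple blocks
            out.write(buf);
        }
        out.close();
    }
}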
Hello
I'm not sure I've understood... actually, I've already set this field in
the configuration file. I think this field just specifies the master for
the HDFS.
My problem is that I have many machines, each holding a bunch of files
which represent the distributed data ... and I want to
Make that Xen kernels.. btw, they scale much better (see previous post)
under heavy load. So now, instead of timeouts and dropped connections, JVM
instances exit prematurely. I'm unsure of the cause just yet, but there
are so few that the impact is negligible.
ckw
On Mar 28, 2008, at 10:00 AM, Doug Cutting wrote:
Seems like we should force things onto the same availability zone by
default, now that this is available. Patch, anyone?
It's already there! I just hadn't noticed.
https://issues.apache.org/jira/browse/HADOOP-2410
Sorry for missing this, Chris!
Doug
Hey everyone,
I'm having a similar problem:
Map output lost, rescheduling:
getMapOutput(task_200803281212_0001_m_00_2,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
task_200803281212_0001_m_00_2/file.out.index in any of the
configured local directories
Hi,
Thanks for your suggestions.
It looks like the problem is with the firewall. I created a firewall rule
to allow ports 5 to 50100 (I found Hadoop listening in this port range).
It looks like I am still missing some ports, and those get blocked by the
firewall.
Could anyone please let me
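A quick sketch for probing which ports are reachable through the firewall
(the host below is a placeholder; the ports are the usual 0.16 defaults
for the datanode and the web UIs):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        String host = "datanode1.example.com"; // placeholder host
        int[] ports = {50010, 50030, 50060, 50070, 50090};
        for (int port : ports) {
            Socket s = new Socket();
            try {
                // Fail fast if the firewall drops the connection.
                s.connect(new InetSocketAddress(host, port), 2000);
                System.out.println(port + " reachable");
            } catch (Exception e) {
                System.out.println(port + " blocked: " + e.getMessage());
            } finally {
                try { s.close(); } catch (Exception ignored) {}
            }
        }
    }
}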
Also, I'm running hadoop 0.16.1 :)
On Fri, Mar 28, 2008 at 1:23 PM, Bradford Stephens
[EMAIL PROTECTED] wrote:
Hey everyone,
I'm having a similar problem:
Map output lost, rescheduling:
getMapOutput(task_200803281212_0001_m_00_2,0) failed :
Hi Bradford,
Could you please check what your mapred.local.dir is set to?
Devaraj.
-----Original Message-----
From: Bradford Stephens [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 29, 2008 1:54 AM
To: core-user@hadoop.apache.org
Cc: [EMAIL PROTECTED]
Subject: Re: hadoop 0.15.3 r612257
Thanks for the hint, Devaraj! I was using paths for mapred.local.dir that
were based on ~/, so I gave it an absolute path instead. Also, the
directory for hadoop.tmp.dir did not exist on one machine :)
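For the archives, a minimal sketch of setting both to absolute paths
programmatically; the paths are hypothetical, and the same values can go
in hadoop-site.xml instead:

import org.apache.hadoop.mapred.JobConf;

public class LocalDirs {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Hadoop does not expand ~/, so use absolute paths.
        conf.set("mapred.local.dir", "/var/hadoop/mapred/local");
        // Must exist (or be creatable) on every node.
        conf.set("hadoop.tmp.dir", "/var/hadoop/tmp");
    }
}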
On Fri, Mar 28, 2008 at 2:00 PM, Devaraj Das [EMAIL PROTECTED] wrote:
Hi Bradford,
Could
Anyone have experience running a production cluster on Open Solaris? The
advantage, of course, is the availability of ZFS, but I haven't seen many
people on the list mention that they use Open Solaris.
Thanks, pete
Hello,
I have been trying to run Hadoop on a set of small text files, none larger
than 10k each. The total input size is 15MB. If I try to run the example
word count application, it takes about 2000 seconds (more than half an
hour) to complete. However, if I merge all the files into one large file,
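Presumably each small file becomes its own map task, so per-task overhead
dominates. A minimal sketch of one common workaround, packing the files
into a single SequenceFile before running the job (paths are hypothetical;
API is the 0.16-era org.apache.hadoop.io.SequenceFile):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PackSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // One SequenceFile holding every small input file:
        // key = file name, value = file contents.
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, new Path("/input/packed.seq"),
                Text.class, Text.class);
        for (File f : new File("/local/small-files").listFiles()) {
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(new FileReader(f));
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            in.close();
            writer.append(new Text(f.getName()), new Text(body.toString()));
        }
        writer.close();
    }
}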