I am using a cluster of mixed hardware, 32-bit and 64-bit machines, to run
Hadoop 0.18.3. I can't use the distribution tarball since I need to apply
a couple of patches. So I build my own Hadoop binaries after applying the
patches that I need. I build two copies, one for 32-bit machines and one for
64-bit machines.
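The build itself is nothing fancy; roughly the following (the patch name is a
placeholder, and the ant property/target are the 0.18-era ones, so double-check
against your build.xml):

  cd hadoop-0.18.3
  patch -p0 < HADOOP-XXXX.patch      # apply each needed patch from JIRA
  ant -Dcompile.native=true tar      # build the release tarball with native libs;
                                     # run once on a 32-bit box, once on a 64-bit box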
Owen, thanks for pointing out that Jira.
Bill
On Mon, Apr 6, 2009 at 7:20 PM, Owen O'Malley omal...@apache.org wrote:
This was discussed over on:
https://issues.apache.org/jira/browse/HADOOP-5203
Doug uploaded a patch, but no one seems to be working on it.
-- Owen
All the heartbeat and timeout intervals are configurable. So you don't need
to decommission a host explicitly. You can configure both the namenode and
the tasktracker to detect a failed host sooner. If you decommission a host,
you will have to explicitly put it back into the cluster.
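If you do go the decommission route, re-adding is just editing the exclude
file and refreshing again. A rough sketch (the file path and hostname are made
up, and dfs.hosts.exclude must already point at the exclude file):

  echo dn3.example.com >> conf/excludes     # decommission: add the host, then refresh
  bin/hadoop dfsadmin -refreshNodes
  # to bring it back: delete the line from conf/excludes and refresh again
  bin/hadoop dfsadmin -refreshNodes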
Bill
On
Not sure why you would want a node to be a datanode but not a tasktracker,
because you normally would want the map/reduce tasks to run where the data is
stored.
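If you really do need a datanode without a tasktracker, one way (a sketch, not
necessarily the only one) is to leave the host out of the slaves file and start
only the datanode daemon on it:

  bin/hadoop-daemon.sh start datanode   # run on that host; no tasktracker is started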
Bill
On Wed, Apr 1, 2009 at 3:58 AM, Sandhya E sandhyabhas...@gmail.com wrote:
Hi
When the host is listed in the slaves file, both DataNode
The time is configurable:
heartbeat.recheck.interval
dfs.heartbeat.interval
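In hadoop-site.xml that looks something like this (the values are only
examples; if I recall correctly the first is in milliseconds and the second in
seconds):

  <property>
    <name>heartbeat.recheck.interval</name>
    <value>300000</value>
  </property>
  <property>
    <name>dfs.heartbeat.interval</name>
    <value>3</value>
  </property>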
Bill
On Wed, Mar 25, 2009 at 6:00 AM, stchu stchu.cl...@gmail.com wrote:
Hi,
I did a test on datanode crashes: I stopped the networking on one of the
datanodes.
The web app and fsck report that the datanode is dead after
this
feature, how do you retrieve the persisted completed jobs status?
Bill
On Wed, Feb 18, 2009 at 10:48 PM, Amareshwari Sriramadasu
amar...@yahoo-inc.com wrote:
Bill Au wrote:
I have enabled persistent completed jobs status and can see them in HDFS.
However, they are not listed
I have enabled persistent completed jobs status and can see them in HDFS.
However, they are not listed in the jobtracker's UI after the jobtracker is
restarted. I thought that the jobtracker will automatically look in HDFS if it
does not find a job in its memory cache. What am I missing? How do I retrieve them?
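For anyone following along, the relevant settings live in hadoop-site.xml; if
I have the property names right (please double-check against hadoop-default.xml
for your release), they are:

  <property>
    <name>mapred.job.tracker.persist.jobstatus.active</name>
    <value>true</value>   <!-- turn persistence on -->
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.hours</name>
    <value>24</value>     <!-- example retention, in hours -->
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.dir</name>
    <value>/jobtracker/jobsInfo</value>   <!-- HDFS dir; this is the usual default -->
  </property>

With those set, hadoop job -status <job-id> should be able to pull the status
back even after a jobtracker restart.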
I see that there is a patch for the fair scheduler for 0.18.1 in
HADOOP-3746. Does anyone know if there is a similar patch for the capacity
scheduler? I did a search on JIRA but didn't find anything.
Bill
Does anyone know
if this is already in the works?
Bill
On Mon, Feb 2, 2009 at 5:00 PM, Bill Au bill.w...@gmail.com wrote:
It looks like the behavior is the same with 0.18.2 and 0.19.0. Even though
I removed the decommissioned node from the exclude file and ran the
refreshNodes command, the node still shows up as dead.
You have to enable the config in hadoop-default.xml,
<name>webinterface.private.actions</name>, by setting it to true.
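As a property block, that would be (the comment is my paraphrase of what it
enables):

  <property>
    <name>webinterface.private.actions</name>
    <value>true</value>   <!-- exposes private actions such as kill job in the web UI -->
  </property>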
Best
Bhupesh
On 1/30/09 3:23 PM, Bill Au bill.w...@gmail.com wrote:
Thanks.
Does anyone know if there is a plan to add this functionality to the web UI?
Job priority, for instance, can be changed from both the command line and the
web UI.
Bill
On Thu, Jan 29, 2009 at 5:40 PM, Bill Au bill.w...@gmail.com wrote:
Not sure why but this does not work for me. I am running 0.18.2. I ran
hadoop dfsadmin -refreshNodes after removing the decommissioned node from
the exclude file. It still shows up as a dead node. I also removed
JavaOne is scheduled for the first week of June this year. Please keep that
in mind since I am guessing I am not the only one who is interested in
both.
Bill
On Thu, Jan 29, 2009 at 7:45 PM, Ajay Anand aan...@yahoo-inc.com wrote:
Yes! We are planning one for the first week of June. I will be
You actually have to set JAVA_HOME to where Java is actually installed on
your system. /usr/lib/jvm/default-java is just an example. The error
messages indicate that that's not where Java is installed on your system.
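Concretely, edit conf/hadoop-env.sh on every node and point it at the real
install location (the path below is only an example):

  # conf/hadoop-env.sh
  export JAVA_HOME=/usr/lib/jvm/java-6-sun   # use the actual JDK path on your system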
Bill
On Fri, Jan 30, 2009 at 5:09 PM, zander1013 zander1...@gmail.com wrote:
Is there any way to cancel a job after it has been submitted?
Bill
Thanks.
Does anyone know if there is a plan to add this functionality to the web UI?
Job priority, for instance, can be changed from both the command line and the
web UI.
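For reference, the command-line side looks like this (the job id is
illustrative, and priorities run VERY_LOW through VERY_HIGH, assuming your
release has -set-priority):

  bin/hadoop job -set-priority job_200901301549_0001 HIGH
  bin/hadoop job -kill job_200901301549_0001   # killing a job, for completeness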
Bill
On Fri, Jan 30, 2009 at 5:54 PM, Arun C Murthy a...@yahoo-inc.com wrote:
On Jan 30, 2009, at 2:41 PM, Bill Au wrote:
Is there any
Did you start your namenode with the -upgrade option after upgrading from
0.18.1 to 0.19.0?
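That is, something like the upgrade step from the wiki page:

  bin/start-dfs.sh -upgrade   # passes -upgrade through to the namenode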
Bill
On Mon, Jan 26, 2009 at 8:18 PM, Yuanyuan Tian yt...@us.ibm.com wrote:
Hi,
I just upgraded hadoop from 0.18.1 to 0.19.0 following the instructions on
http://wiki.apache.org/hadoop/Hadoop_Upgrade.
I was able to decommission a datanode successfully without having to stop my
cluster. But I noticed that after a node has been decommissioned, it shows
up as a dead node in the web-based interface to the namenode (i.e.
dfshealth.jsp). My cluster is relatively small and losing a datanode will
have
Is Hadoop designed to run on homogeneous hardware only, or does it work just
as well on heterogeneous hardware? If the datanodes have different
disk capacities, does HDFS still spread the data blocks equally among all
the datanodes, or will the datanodes with higher disk capacity end up
There is a secondary NameNode which performs periodic checkpoints:
http://wiki.apache.org/hadoop/FAQ?highlight=(secondary)#7
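There is no automatic failover in these releases, but manual recovery from the
latest checkpoint is possible. A rough sketch with illustrative default paths
(fs.checkpoint.dir is the secondary's checkpoint directory, dfs.name.dir the
new namenode's image directory; newer releases also have
bin/hadoop namenode -importCheckpoint for this):

  # on the replacement namenode host, with the same conf installed
  scp -r secondary:/tmp/hadoop/dfs/namesecondary/* /tmp/hadoop/dfs/name/
  bin/hadoop-daemon.sh start namenode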
Are there any instructions out there on how to copy the FS image and edits
log from the secondary NameNode to a new machine when the original NameNode
fails?
Bill
On