How to remove multiple datanodes dynamically from the masternode without
stopping it?
--
View this message in context:
http://old.nabble.com/Stopping-datanodes-dynamically-tp30804859p30804859.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
I think if you add the datanode hostnames under the dfs.hosts.exclude key in the
HDFS conf file and then refresh the nodes [hadoop dfsadmin -refreshNodes], it might work. Thanks,
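To make that flow concrete, here is a minimal sketch of the decommissioning steps. The exclude-file path and hostnames below are made up for illustration; point dfs.hosts.exclude in hdfs-site.xml at whatever file your cluster actually uses.

```shell
# Stand-in path for this sketch; dfs.hosts.exclude must point at your real file.
EXCLUDE_FILE=/tmp/dfs.exclude

# 1. List the datanodes to decommission, one hostname per line.
printf 'datanode03.example.com\ndatanode04.example.com\n' > "$EXCLUDE_FILE"

# 2. Ask the NameNode to re-read its include/exclude lists (needs a live cluster):
# hadoop dfsadmin -refreshNodes

# 3. Poll until the nodes report "Decommissioned", then stop their daemons:
# hadoop dfsadmin -report
```

Decommissioning this way re-replicates the blocks off the excluded nodes before they go away, which is why it works without stopping the NameNode.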
On 1/31/11 2:42 PM, Jam Ram hadoo...@gmail.com wrote:
How to remove the multiple datanodes dynamically from the masternode without
Rishi,
Using an exclude list for the TT will not help, as Koji has already mentioned.
It helps a bit in the sense that no more tasks are assigned to that TaskTracker
once it is excluded.
As for TT decommissioning and the handling of map outputs, I have opened a Jira
for further discussion.
Yes, this is exactly what I observed. Reading is another problem. Thanks.
Best,
Da
On 01/30/2011 05:25 PM, Jeff Hammerbacher wrote:
Hey Da,
You may have observed https://issues.apache.org/jira/browse/HDFS-1601.
Regards,
Jeff
On Fri, Jan 28, 2011 at 7:08 PM, Da Zheng zhengda1...@gmail.com wrote:
Hi Koji,
Thanks for opening the feature request. Right now, for the purpose stated
earlier, I have upgraded to Hadoop 0.21 and am trying to see whether creating
individual leaf-level queues for every tasktracker and changing their state
to 'stopped' before the walltime expires will work.
It seems I still need to figure out whether a queue can be associated with a
TT, i.e. a TT ACL for a queue, in which tasks submitted to that queue would
only be relayed to the TTs in the queue's ACL list.
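If it helps, the 0.21 queue definitions live in conf/mapred-queues.xml, and a leaf queue flipped to 'stopped' might look roughly like the sketch below. The queue name is invented here, and whether such a queue can actually be bound to a single TT is exactly the open question; this only shows the state flag that blocks new job submissions to the queue.

```xml
<queues>
  <queue>
    <!-- invented queue name, imagined as one queue per tasktracker -->
    <name>tt-node01</name>
    <!-- 'stopped' prevents new jobs from being submitted to this queue -->
    <state>stopped</state>
  </queue>
</queues>
```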
On Mon, Jan 31, 2011 at 10:51 PM, rishi pathak mailmaverick...@gmail.com wrote:
Hi Koji,
Hi Rishi,
P.S. - What credentials are required for commenting on an issue in Jira?
It's open source. I'd say none :)
My feature request is for regular Hadoop clusters, whereas yours is pretty
unique.
Not sure if that Jira applies to your need or not.
Koji
On 1/31/11 9:21 AM, rishi pathak wrote:
Hey All,
I am trying to install Hadoop 0.20.2 and got as far as starting the daemons
with bin/start-all.sh.
But I see no datanode created.
jps output:
43323 JobTracker
43281 SecondaryNameNode
43162 NameNode
43403 Jps
43381 TaskTracker
19747
NameNode log:
2011-01-31 15:08:47,343 INFO
Hi all,
I was wondering if any of you have had a similar experience working with
Hadoop in Amazon's environment. I've been running a few jobs over the last
few months and have noticed them taking more and more time. For instance, I
was running teragen/terasort/teravalidate as a benchmark and
Check your DataNode logs under $HADOOP_HOME/logs/. It would have the
reason why it did not start.
You can also issue `hadoop datanode` and watch the exceptional movie play.
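A sketch of that log check, with everything stubbed out so it stands alone: the log directory and file name below are stand-ins modeled on the default hadoop-<user>-datanode-<host>.log naming, and the sample FATAL line is fabricated for the demo.

```shell
# Stand-in for $HADOOP_HOME/logs so the sketch is self-contained.
LOG_DIR=/tmp/hadoop-logs
mkdir -p "$LOG_DIR"
printf '2011-02-01 10:00:01 FATAL datanode.DataNode: java.io.IOException: Incompatible namespaceIDs\n' \
  > "$LOG_DIR/hadoop-demo-datanode-host1.log"

# Pick the newest datanode log and surface the reason startup failed.
newest=$(ls -t "$LOG_DIR"/hadoop-*-datanode-*.log | head -n 1)
grep -E 'ERROR|FATAL|Exception' "$newest"

# Or run the daemon in the foreground so the stack trace prints directly:
# hadoop datanode
```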
On Tue, Feb 1, 2011 at 4:46 AM, sharath jagannath
sharathjagann...@gmail.com wrote:
Hey All,
I am trying to install
Hi, I have setup a Hadoop cluster as per the instructions for CDH3. When I
try to start the datanode on the slave, I get this error,
org.apache.hadoop.hdfs.server.datanode.DataNode:
java.lang.IllegalArgumentException: Invalid URI for NameNode address
(check fs.defaultFS): file:/// has no
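For context, file:/// shows up when fs.default.name (fs.defaultFS in newer releases) is never read, so the DataNode falls back to the local filesystem. The shape of core-site.xml it expects is roughly the sketch below; the hostname and port are placeholders, not values from this thread.

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <!-- placeholder NameNode address; use your master's host and port -->
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```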
Hi,
Double check that your configuration XML files are well-formed. You can do
this easily using a validator like tidy. My guess is that one of the tags
is mismatched so the configuration isn't being read.
-Todd
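One portable way to run that well-formedness check is with Python's stdlib XML parser, sketched below against a sample file (paths are examples; `xmllint --noout file.xml` does the same job where libxml2 is installed).

```shell
# Write a sample config so the sketch is self-contained.
CONF=/tmp/core-site.xml
cat > "$CONF" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>
EOF

# A mismatched tag makes the parse fail with a non-zero exit and a line number.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$CONF" \
  && echo "$CONF is well-formed"
```

Hadoop silently falls back to defaults when a config file fails to parse, which is why a single mismatched tag can look like a missing setting.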
On Mon, Jan 31, 2011 at 9:19 PM, danoomistmatiste kkhambadk...@yahoo.com wrote: