The decommission process is for datanodes, which you are not
running. Have a look at the mapred.hosts.exclude property for how to
exclude tasktrackers.
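
For example, here's a minimal sketch of what that could look like (the
exclude file path below is just an illustration; point it at wherever
you keep your config). On the JobTracker machine, add this to
conf/hadoop-site.xml:

  <property>
    <name>mapred.hosts.exclude</name>
    <value>/path/to/hadoop/conf/mapred.exclude</value>
    <description>File naming the hosts the JobTracker should
    exclude from the cluster.</description>
  </property>

and put the machine to exclude in that file, one hostname per line,
e.g. a mapred.exclude containing just:

  mystique

As far as I know the JobTracker in 0.19 only reads the exclude file
when it starts, so restart it (stop-mapred.sh / start-mapred.sh) after
making the change.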

Tom

On Tue, Feb 17, 2009 at 5:31 PM, S D <[email protected]> wrote:
> Thanks for your response. For clarification, I'm using S3 Native instead of
> HDFS. Hence, I'm not even calling start-dfs.sh since I'm not using a
> distributed filesystem. Given that, is decommissioning nodes even
> applicable? When I ran 'hadoop dfsadmin -refreshNodes' I received the
> following response:
>
> FileSystem is s3n://<bucketname>
>
> Thanks,
> John
>
> On Tue, Feb 17, 2009 at 4:20 PM, Amandeep Khurana <[email protected]> wrote:
>
>> You have to decommission the node. Look at
>> http://wiki.apache.org/hadoop/FAQ#17
>>
>> Amandeep
>>
>>
>> Amandeep Khurana
>> Computer Science Graduate Student
>> University of California, Santa Cruz
>>
>>
>> On Tue, Feb 17, 2009 at 2:14 PM, S D <[email protected]> wrote:
>>
>> > I have a Hadoop 0.19.0 cluster of 3 machines (storm, mystique,
>> > batman). It seemed as if problems were occurring on mystique (I was
>> > noticing errors with tasks that executed on mystique), so I decided
>> > to remove mystique. I did so by calling stop-mapred.sh (I'm using S3
>> > Native, not HDFS) and removing mystique from the
>> > $HADOOP_HOME/conf/slaves file on storm and batman. I then called
>> > start-mapred.sh and verified (via the output of start-mapred.sh)
>> > that tasktrackers were started only on batman and storm. When I
>> > started my MapReduce program I viewed the task tracker machine list
>> > web interface and saw that not only was mystique listed as one of
>> > the task trackers but that a task had been assigned to it. How can I
>> > keep a machine from being included in a cluster?
>> >
>> > Any help is appreciated.
>> >
>> > Thanks,
>> > John
>> >
>>
>
