I will try it tomorrow when I go to class.

Thanks for the quick response! :-)


On Sun, Feb 6, 2011 at 5:31 PM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> Do this test: do a copyFromLocal to create a new file in HDFS, then check
> the block size of that new file. It should be 128MB if your changes took
> effect.
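>
> A minimal sketch of that test (file names are placeholders; fsck and stat
> are two ways to inspect the block size):
>
>   # copy a local file into HDFS, then check what block size it was written with
>   hadoop fs -copyFromLocal bigfile.dat /tmp/bigfile.dat
>   hadoop fs -stat %o /tmp/bigfile.dat          # prints the block size in bytes
>   hadoop fsck /tmp/bigfile.dat -files -blocks  # lists the individual blocks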
>
> Sent from my iPhone
>
> On Feb 6, 2011, at 2:24 PM, Rita <rmorgan...@gmail.com> wrote:
>
> Bharath,
> So I have to restart the entire cluster? That is, do I need to stop the
> namenode and then run start-dfs.sh?
>
> Ayon,
> What I did was decommission a node, remove all of its data (rm -rf on the
> data.dir), and stop the HDFS process on it. Then I made the change to
> conf/hdfs-site.xml on that datanode and restarted it. I then ran the
> balancer for the change to take effect, and I am still getting 64MB blocks
> instead of 128MB. :-/
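>
> (One likely explanation: in HDFS of this vintage, dfs.block.size is read
> by the client that writes a file, not by the datanodes, and the balancer
> only moves existing blocks; it never re-splits them. A quick check that
> needs no config edits is to override the value for a single write, which
> relies on hadoop fs accepting the generic -D option, as the 0.20-era
> FsShell does; file names below are placeholders:)
>
>   # write one file with an explicit 128MB (134217728-byte) block size
>   hadoop fs -D dfs.block.size=134217728 -copyFromLocal bigfile.dat /tmp/big128.dat
>   hadoop fs -stat %o /tmp/big128.dat   # should print 134217728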
>
>
>
>
>
> On Sun, Feb 6, 2011 at 2:25 PM, Bharath Mundlapudi <bharathw...@yahoo.com> wrote:
>
>> Can you tell us how you are verifying that it's not working?
>>
>> Edit dfs.block.size in conf/hdfs-site.xml
>>
>>
>> And restart the cluster.
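>>
>> A sketch of that edit, assuming the 0.20-era property name used in this
>> thread (134217728 bytes = 128MB):
>>
>>   <!-- conf/hdfs-site.xml -->
>>   <property>
>>     <name>dfs.block.size</name>
>>     <value>134217728</value>
>>   </property>
>>
>>   # then restart HDFS
>>   bin/stop-dfs.sh
>>   bin/start-dfs.sh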
>>
>> -Bharath
>>
>>
>> From: Rita <rmorgan...@gmail.com>
>> To: hdfs-user@hadoop.apache.org
>> Sent: Sunday, February 6, 2011 8:50 AM
>> Subject: Re: changing the block size
>>
>> Neither one was working.
>>
>> Is there anything I can do? I always have problems like this with HDFS. It
>> seems even experts are guessing at the answers :-/
>>
>>
>> On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <ayonsi...@yahoo.com> wrote:
>>
>> conf/hdfs-site.xml
>>
>> Restart DFS. I believe it should be sufficient to restart the namenode
>> only, but others can confirm.
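>>
>> A sketch of a namenode-only restart, assuming the stock daemon scripts
>> from a tarball install (paths vary by setup):
>>
>>   # restart just the namenode, leaving the datanodes up
>>   bin/hadoop-daemon.sh stop namenode
>>   bin/hadoop-daemon.sh start namenode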
>>
>> -Ayon
>>
>> From: Rita <rmorgan...@gmail.com>
>> To: hdfs-user@hadoop.apache.org
>> Sent: Thu, February 3, 2011 4:35:09 AM
>> Subject: changing the block size
>>
>> Currently I am using the default block size of 64MB. I would like to
>> change it for my cluster to 256MB since I deal with large files (over
>> 2GB). What is the best way to do this?
>>
>> Which file do I have to make the change in? Does it have to be applied on
>> the namenode or on each individual datanode? And what has to be restarted:
>> the namenode, the datanodes, or both?
>>
>>
>>
>> --
>> --- Get your facts first, then you can distort them as you please.--
>>
>>
>>
>>
>> --
>> --- Get your facts first, then you can distort them as you please.--
>>
>>
>>
>>
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>
>


-- 
--- Get your facts first, then you can distort them as you please.--
