Neither one worked.

Is there anything I can do? I always have problems like this with HDFS. It
seems even the experts are guessing at the answers :-/


On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> conf/hdfs-site.xml
>
> Restart DFS. I believe it should be sufficient to restart the namenode
> only, but others can confirm.
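>
> For example, something along these lines should do it (a sketch, not
> tested here; the property name is dfs.block.size in this version of
> Hadoop, and the value is in bytes, so 256MB = 268435456):
>
>   <property>
>     <name>dfs.block.size</name>
>     <value>268435456</value>
>   </property>
>
> Also note, if I recall correctly, the new size only applies to files
> written after the change; existing files keep the block size they were
> created with.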
>
> -Ayon
>
> ------------------------------
> From: Rita <rmorgan...@gmail.com>
> To: hdfs-user@hadoop.apache.org
> Sent: Thu, February 3, 2011 4:35:09 AM
> Subject: changing the block size
>
> Currently I am using the default block size of 64MB. I would like to change
> it for my cluster to 256MB, since I deal with large files (over 2GB).
> What is the best way to do this?
>
> What file do I have to make the change in? Does it have to be applied on
> the namenode or on each individual datanode? What has to get restarted:
> the namenode, the datanodes, or both?
>
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>
>


-- 
--- Get your facts first, then you can distort them as you please.--
