Please try  -D dfs.block.size=4096000
The value must be specified in bytes, so a suffix like "4M" will not work.
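
In other words, assuming you are loading the file through the FsShell as
in your command below (the path is just your example; 4096000 bytes is
roughly a 4 MB block), something like this should do it:

  hadoop dfs -D dfs.block.size=4096000 -put file /dest/

You can then verify the block size and block count the stored file
actually got with fsck:

  hadoop fsck /dest/file -files -blocks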

On Tue, May 5, 2009 at 4:47 AM, Christian Ulrik Søttrup <soett...@nbi.dk> wrote:

> Hi all,
>
> I have a job that creates very big local files, so I need to split it
> across as many mappers as possible. With the DFS block size I'm
> using, this job is only split into 3 mappers. I don't want to change the
> HDFS-wide block size because it works for my other jobs.
>
> Is there a way to give a specific file a different block size? The
> documentation says there is, but does not explain how.
> I've tried:
> hadoop dfs -D dfs.block.size=4M -put file  /dest/
>
> But that does not work.
>
> Any help would be appreciated.
>
> Cheers,
> Chrulle
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com a community for Hadoop Professionals
