I'd like to set a smaller block size for a specific file. That is, if
HDFS is configured to use 64 MB blocks, I'd like to use 32 MB blocks for
one particular file.


Is there a way to do this from the command line, without writing a jar
that uses org.apache.hadoop.fs.FileSystem.create()?

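For reference, my understanding is that the jar I'm trying to avoid
would look roughly like this (just a sketch -- the buffer size and
replication factor below are placeholders, not values I care about):

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PutWithBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        InputStream in = new FileInputStream("/local/path");
        // create(path, overwrite, bufferSize, replication, blockSize)
        // -- the overload that takes an explicit per-file block size
        OutputStream out = fs.create(new Path("/remote/path"),
                true, 4096, (short) 3, 32L * 1024 * 1024);
        IOUtils.copyBytes(in, out, 4096, true); // closes both streams
    }
}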

I tried the following, but it didn't work:


hadoop fs -Ddfs.block.size=1048576 -put /local/path /remote/path


I also tried -copyFromLocal. In both cases it looks like the -D option
is being ignored.

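For what it's worth, I'm judging the actual block size from the block
lengths that fsck reports (each block is listed with a len= value):

hadoop fsck /remote/path -files -blocks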

Thanks.


-Ben

