[ https://issues.apache.org/jira/browse/HDFS-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers resolved HDFS-2216.
----------------------------------

    Resolution: Invalid

Hey Gabriel, Apache JIRA is for tracking confirmed bugs/improvements/features, 
not for answering user questions like this one. This question would be better 
asked on a mailing list like hdfs-u...@hadoop.apache.org.

That said, you should know that the dfs.blocksize configuration option is, 
perhaps surprisingly, only read by HDFS clients. Can you confirm that this 
setting was set on the client machine from which you were uploading the file(s) 
into HDFS?
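
To illustrate: the block size of a new file is chosen by the writing client, 
either from the client's own configuration or passed explicitly at create time. 
A minimal sketch against the Hadoop Java FileSystem API (the class name, path, 
and value here are illustrative, not taken from this issue):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ClientBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Read by the client when it creates a file; hdfs-site.xml on the
            // datanodes or namenode is not consulted for this value.
            conf.setLong("dfs.block.size", 134217728L); // 0.20.x key, plain bytes

            FileSystem fs = FileSystem.get(conf);
            // Alternatively, bypass the configuration and pass the block size
            // per file:
            // fs.create(p, true, 4096, fs.getDefaultReplication(), 134217728L);
            Path p = new Path("/tmp/example.dat"); // hypothetical path
            FSDataOutputStream out = fs.create(p);
            out.writeBytes("written with the client's block size\n");
            out.close();
        }
    }

Run with the setting above on the client, the file gets 128 MB blocks no matter 
what the server-side hdfs-site.xml contains.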

> dfs.blocksize and dfs.block.size are ignored
> --------------------------------------------
>
>                 Key: HDFS-2216
>                 URL: https://issues.apache.org/jira/browse/HDFS-2216
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.2
>         Environment: Linux Ubuntu 
>            Reporter: Gabriel Eisbruch
>
> Hi everybody,
> I have a big problem with the block size configuration. I tried different
> configurations in the hdfs-site config (the datanodes and namenodes have the
> same configuration).
> I tried:
> <property>
>   <name>dfs.block.size</name>
>   <value>128m</value>
>   <final>true</final>
> </property>
> -----------------------
> <property>
>   <name>dfs.blocksize</name>
>   <value>128m</value>
>   <final>true</final>
> </property>
> <property>
>   <name>dfs.block.size</name>
>   <value>134217728</value>
>   <final>true</final>
> </property>
> -----------------------
> <property>
>   <name>dfs.blocksize</name>
>   <value>134217728</value>
>   <final>true</final>
> </property>
> But in all cases, when I run a map-reduce job I find that the number of slots
> is proportional to 64M blocks, and if I run "du -hs" on all the datanodes I
> find that the block files are 65M.
> For example:
> 65M   blk_720821373677199742
> 520K  blk_720821373677199742_13833.meta
> 65M   blk_-7294849724164540020
> 520K  blk_-7294849724164540020_7314.meta
> 65M   blk_7468624346905314857
> 520K  blk_7468624346905314857_7312.meta
> 65M   blk_7638666943543421576
> 520K  blk_7638666943543421576_7312.meta
> 65M   blk_7830551307355288414
> 520K  blk_7830551307355288414_7314.meta
> 65M   blk_7844142950685471855
> 520K  blk_7844142950685471855_7312.meta
> 65M   blk_7978753697206960302
> 520K  blk_7978753697206960302_7312.meta
> 65M   blk_-7997715050017508513
> 520K  blk_-7997715050017508513_7313.meta
> 65M   blk_-8085168141075809653
> 520K  blk_-8085168141075809653_7314.meta
> 65M   blk_8250324684742742886
> 520K  blk_8250324684742742886_7314.meta
> 65M   blk_839132493383742510
> 520K  blk_839132493383742510_7312.meta
> 65M   blk_847712434829366950
> 520K  blk_847712434829366950_7314.meta
> 65M   blk_-8735461258629196142
> But if I run "hadoop fs -stat %o file" I get 134217728.
> Do you know if I am doing something wrong? Or does it sound like a bug?
> Thanks
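
The quoted report checks sizes in two different places: "du -hs" on a datanode 
measures the block files actually written to disk, while "hadoop fs -stat %o" 
prints the block size recorded in the file's metadata when it was created. The 
same metadata can be read through the Java API; a minimal sketch, with a 
hypothetical path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockSize {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Block size is per-file metadata, fixed at create time; files
            // written earlier, or by a client with a different configuration,
            // keep whatever block size they were created with.
            FileStatus st = fs.getFileStatus(new Path("/tmp/example.dat"));
            System.out.println(st.getPath() + " block size: " + st.getBlockSize());
        }
    }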
