Hi Harsh

Yep.



Regards 






Sent from my iPhone

On 2013-5-10, at 13:27, Harsh J <[email protected]> wrote:

> Are you looking to decrease it to get more parallel map tasks out of
> the small files? Are you currently CPU bound on processing these small
> files?
> 
> On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <[email protected]> 
> wrote:
>> Hi all,
>> 
>>     I am going to set up a new Hadoop environment. Because there are
>> lots of small files, I would like to change the default.block.size to
>> 16MB rather than merging the files into larger ones (e.g. using
>> SequenceFiles).
>>    I want to ask: are there any bad influences or issues with this?
>> 
>> Regards
> 
> 
> 
> -- 
> Harsh J
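
For readers of this thread, below is a minimal sketch (not part of the original exchange) of how a smaller block size could be requested from an HDFS client, assuming Hadoop 2.x property names (dfs.blocksize; 1.x releases used dfs.block.size) and a hypothetical output path. The same value can instead be set cluster-wide in hdfs-site.xml.

    // A minimal sketch, assuming Hadoop 2.x (property name dfs.blocksize;
    // 1.x used dfs.block.size). Sets the block size per-client rather than
    // changing the cluster-wide default.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SmallBlockExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Ask for 16MB blocks for files created through this client.
            conf.setLong("dfs.blocksize", 16L * 1024 * 1024);

            FileSystem fs = FileSystem.get(conf);
            // Hypothetical path, for illustration only.
            Path out = new Path("/tmp/small-block-example.txt");
            try (FSDataOutputStream os = fs.create(out)) {
                os.writeBytes("example payload\n");
            }
        }
    }

Note that a block-size setting only affects files written with that configuration; files already in HDFS keep the block size they were created with.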
