Thanks. I forgot to add: it should be okay to do if those conditions hold
and the cluster seems under-utilized right now.
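
For reference, a minimal hdfs-site.xml sketch of the change (this assumes
the 1.x property name dfs.block.size; on 2.x it is dfs.blocksize, and the
value is in bytes):

  <property>
    <name>dfs.block.size</name>
    <!-- 16 MB; the stock 1.x default is 64 MB (67108864) -->
    <value>16777216</value>
  </property>

Note that the block size is recorded per file at write time, so this only
affects files created after the change; existing files keep whatever block
size they were written with.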

On Fri, May 10, 2013 at 8:29 PM, yypvsxf19870706
<[email protected]> wrote:
> Hi harsh
>
> Yep.
>
> Regards
>
> Sent from my iPhone
>
> On 2013-5-10, at 13:27, Harsh J <[email protected]> wrote:
>
>> Are you looking to decrease it to get more parallel map tasks out of
>> the small files? Are you currently CPU bound on processing these small
>> files?
>>
>> On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <[email protected]> 
>> wrote:
>>> Hi all,
>>>
>>>     I am going to set up a new Hadoop environment. Because there are
>>> lots of small files, I would like to change the default block size to
>>> 16 MB rather than merging the files into larger ones (e.g. using
>>> SequenceFiles).
>>>    I want to ask: are there any bad influences or issues?
>>>
>>> Regards
>>
>>
>>
>> --
>> Harsh J



-- 
Harsh J
