[ https://issues.apache.org/jira/browse/HAMA-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655839#comment-13655839 ]

MaoYuan Xian commented on HAMA-757:
-----------------------------------

Yes. DFSClient creates files using the "dfs.block.size" value as the block size 
reference. But that approach requires the customer or the Hama job client to 
know each partition's size in advance and to set the correct value when 
creating the file output stream.
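
For illustration, a minimal sketch of what setting a per-file block size through
the Hadoop FileSystem API would look like (createPartitionFile and
expectedPartitionBytes are hypothetical names, not Hama code); it shows the
burden described above: the caller must already know how large each partition
will be.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PartitionOutputExample {

      // Create one partition file whose block size is at least the expected
      // partition size, so the file occupies a single HDFS block and later
      // yields a single split. The caller has to know expectedPartitionBytes
      // up front.
      public static FSDataOutputStream createPartitionFile(Configuration conf,
          Path file, long expectedPartitionBytes) throws IOException {
        FileSystem fs = file.getFileSystem(conf);
        long defaultBlockSize = conf.getLong("dfs.block.size", 64 * 1024 * 1024L);
        // HDFS expects the block size to be a multiple of io.bytes.per.checksum
        // (512 by default), so round the requested size up.
        long blockSize = Math.max(defaultBlockSize,
            ((expectedPartitionBytes + 511) / 512) * 512);
        return fs.create(file,
            true,                                       // overwrite
            conf.getInt("io.file.buffer.size", 4096),   // buffer size
            fs.getDefaultReplication(),                 // replication factor
            blockSize);                                 // per-file block size
      }
    }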
                
> The partitioning job output should be un-splitable
> --------------------------------------------------
>
>                 Key: HAMA-757
>                 URL: https://issues.apache.org/jira/browse/HAMA-757
>             Project: Hama
>          Issue Type: Bug
>          Components: bsp core
>    Affects Versions: 0.6.1
>            Reporter: MaoYuan Xian
>
> When the output sequence files from the partitioning job are large (bigger 
> than two HDFS file block sizes), the second round of the job (using these 
> sequence files as input) will start more tasks than the client wants. 
> Sometimes this uncertainty makes the job exceed the cluster's slot capacity.
> In a real project, I implemented a new InputFormat marked as un-splitable to 
> solve the problem. Is there a better way?
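
The workaround mentioned in the description could look roughly like the sketch
below, assuming Hama's org.apache.hama.bsp.FileInputFormat exposes an
isSplitable(FileSystem, Path) hook the way Hadoop's old mapred FileInputFormat
does (the class name NonSplittableSequenceFileInputFormat is made up):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hama.bsp.SequenceFileInputFormat;

    // Non-splittable variant: every partition file becomes exactly one split,
    // so the number of tasks equals the number of partition files regardless
    // of how many HDFS blocks each file spans.
    public class NonSplittableSequenceFileInputFormat<K, V>
        extends SequenceFileInputFormat<K, V> {

      @Override
      protected boolean isSplitable(FileSystem fs, Path file) {
        return false;
      }
    }

A job would then register this class as its input format instead of the stock
SequenceFileInputFormat, trading finer-grained parallelism for a predictable
task count.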

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
