[
https://issues.apache.org/jira/browse/CRUNCH-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909754#comment-13909754
]
Gabriel Reid commented on CRUNCH-351:
-------------------------------------
{quote}
I think a constant random seed is effectively the same as using an increasing key
and passing records to reducers in round-robin fashion. The general drawback is that
all mappers will produce the same sequence. For this particular problem, I think
using the round-robin approach is OK and simpler.
{quote}
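As a concrete illustration of the increasing-key / round-robin idea quoted above (this is only a sketch, not the attached patch, and the class and field names are made up for the example), a Crunch-style {{MapFn}} could pair each record with a cycling integer key:
{code}
import org.apache.crunch.MapFn;
import org.apache.crunch.Pair;

/**
 * Sketch only: assigns keys round-robin so that records spread evenly over
 * reducers and the shuffle never sorts on the record contents themselves.
 */
public class RoundRobinKeyFn<S> extends MapFn<S, Pair<Integer, S>> {
  private final int numPartitions;
  private transient int count;

  public RoundRobinKeyFn(int numPartitions) {
    this.numPartitions = numPartitions;
  }

  @Override
  public Pair<Integer, S> map(S input) {
    // Cycle the key over a fixed number of partitions.
    count = (count + 1) % numPartitions;
    return Pair.of(count, input);
  }
}
{code}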
That makes a lot of sense. What I actually had in mind with switching to int
was using a much smaller range of keys, i.e. doing something like
{code}
count = (count + 1) % (numPartitions * 3);
{code}
with the idea of having a really small number of different keys so that sorting
the keys within each partition would require almost no processing. On the other
hand, that idea is likely such a micro-optimization that it wouldn't make any
noticeable difference, so what you've got here looks good to me.
> Improve performance of Shard#shard on large records
> ---------------------------------------------------
>
> Key: CRUNCH-351
> URL: https://issues.apache.org/jira/browse/CRUNCH-351
> Project: Crunch
> Issue Type: Improvement
> Reporter: Chao Shi
> Assignee: Chao Shi
> Attachments: crunch-351-v2.patch, crunch-351.patch
>
>
> This avoids sorting on the input data, which may be large and make the
> shuffle phase slow. The improvement is to sort on pseudo-random numbers instead.
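A minimal sketch of the pseudo-random keying idea described above, assuming a Crunch-style {{MapFn}}; this is illustrative only and is not the code in the attached patches:
{code}
import java.util.Random;

import org.apache.crunch.MapFn;
import org.apache.crunch.Pair;

/**
 * Sketch only: pairs each record with a pseudo-random key so the shuffle
 * sorts on small integer keys instead of on the (possibly large) records.
 */
public class RandomKeyFn<S> extends MapFn<S, Pair<Integer, S>> {
  private transient Random random;

  @Override
  public void initialize() {
    random = new Random();
  }

  @Override
  public Pair<Integer, S> map(S input) {
    return Pair.of(random.nextInt(), input);
  }
}
{code}
Keying on a small integer like this keeps the shuffle comparisons cheap regardless of how large the record payloads are.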
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)