[
https://issues.apache.org/jira/browse/SPARK-17417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15468519#comment-15468519
]
Sean Owen commented on SPARK-17417:
-----------------------------------
I'd bump the padding to allow 10 digits, because that would accommodate any
32-bit int, and having more partitions than that would cause other things to
fail anyway. As long as the parsing code can read the 'old format' too, this
should work fine.
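A minimal sketch of the padding bump under discussion, assuming helpers shaped
like ReliableCheckpointRDD's checkpoint file naming (the names below are
illustrative, not a patch):

{code:scala}
import scala.util.Try

// Writer side: pad to 10 digits so any non-negative 32-bit partition index
// sorts correctly as a string (Int.MaxValue = 2147483647 has 10 digits).
def checkpointFileName(partitionIndex: Int): String =
  "part-%010d".format(partitionIndex)

// Reader side: parse the numeric suffix instead of assuming a fixed width,
// so both old 5-digit names and new 10-digit names are accepted.
def partitionIndexOf(fileName: String): Option[Int] =
  if (fileName.startsWith("part-")) Try(fileName.stripPrefix("part-").toInt).toOption
  else None
{code}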
> Fix # of partitions for RDD while checkpointing - Currently limited by
> 100000 (%05d)
> ----------------------------------------------------------------------------------
>
> Key: SPARK-17417
> URL: https://issues.apache.org/jira/browse/SPARK-17417
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Reporter: Dhruve Ashar
>
> Spark currently assumes the number of partitions to be less than 100000 and
> uses %05d padding.
> If we exceed this number, the sort logic in ReliableCheckpointRDD gets messed
> up and fails, because the part files are sorted and compared as strings.
> This leads the filename order to be part-10000, part-100000, ... instead of
> part-10000, part-10001, ..., part-100000, and while reconstructing the
> checkpointed RDD the job fails. A short demonstration follows.
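> A minimal demonstration of the failure mode, runnable in a Scala REPL (the
> indices below are illustrative):
> {code:scala}
> // Zero-padded names sort correctly as strings only while every index fits
> // in the padded width (5 digits here, i.e. indices below 100000).
> val names = Seq(9999, 10000, 99999, 100000, 100001).map(i => "part-%05d".format(i))
> println(names.sorted)
> // List(part-09999, part-10000, part-100000, part-100001, part-99999)
> // part-100000 and part-100001 sort before part-99999, so the partition
> // order recovered from a sorted file listing is wrong.
> {code}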
> Possible solutions:
> - Bump the padding to allow more partitions, or
> - Sort the part files by extracting the numeric sub-portion of the name
> instead of comparing whole names as strings, and then verify the RDD (see
> the sketch after this list).
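> A sketch of the second option, sorting by the numeric suffix rather than by
> the raw name (the helper sortPartFiles is hypothetical, not Spark's API):
> {code:scala}
> // Mixed-width part-file names order correctly once compared numerically.
> def sortPartFiles(fileNames: Seq[String]): Seq[String] =
>   fileNames.sortBy(_.stripPrefix("part-").toInt)
> // sortPartFiles(Seq("part-100000", "part-10001", "part-10000"))
> // => List(part-10000, part-10001, part-100000)
> {code}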