[ https://issues.apache.org/jira/browse/SPARK-17417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15547849#comment-15547849 ]
Sean Owen commented on SPARK-17417:
-----------------------------------
I think you're right, and it should also be no extra work to read a 5-digit
number as well as a 10-digit one anyway.
I'm not sure about the cleanup logic, though; that would be a different question.
The dir should probably be left as-is, but perhaps not its contents.
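The backward-compatible read that the comment describes can be sketched as follows (Python used for illustration only; `partition_index` is a hypothetical helper, not Spark's actual code, but the `part-` naming mirrors the checkpoint files discussed here):

```python
import re

# Accept any digit width after "part-", so checkpoints written with the
# old %05d padding and ones written with wider padding both parse.
PART_RE = re.compile(r"^part-(\d+)$")

def partition_index(filename):
    """Return the numeric partition index, or None for non-part files."""
    m = PART_RE.match(filename)
    return int(m.group(1)) if m else None
```

Because the pattern matches the whole numeric suffix regardless of length, widening the padding would not break reads of existing 5-digit checkpoints.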
> Fix # of partitions for RDD while checkpointing - Currently limited by
> 10000(%05d)
> ----------------------------------------------------------------------------------
>
> Key: SPARK-17417
> URL: https://issues.apache.org/jira/browse/SPARK-17417
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Reporter: Dhruve Ashar
>
> Spark currently assumes the number of partitions to be less than 100000 and
> uses %05d padding.
> If we exceed this number, the sort logic in ReliableCheckpointRDD gets messed
> up and fails. This is because the part files are sorted and compared as
> strings. The filename order then becomes part-10000, part-100000, ... instead
> of part-10000, part-10001, ..., part-100000, and reconstructing the
> checkpointed RDD fails.
> Possible solutions:
> - Bump the padding to allow more partitions, or
> - Sort the part files by extracting the numeric sub-portion, and then verify
> the RDD
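The string-vs-numeric ordering problem described above, and the second proposed fix, can be sketched in a few lines (Python used for illustration; the filenames mirror Spark's part-%05d naming):

```python
# %05d padding overflows at partition 100000, so a plain string sort
# no longer matches numeric partition order.
names = ["part-%05d" % i for i in (9999, 10000, 99999, 100000, 100001)]

# Lexicographic sort: part-100000 sorts before part-99999,
# because '1' < '9' at the first differing character.
string_sorted = sorted(names)

# Sorting on the numeric suffix instead restores the intended order.
numeric_sorted = sorted(names, key=lambda n: int(n.rsplit("-", 1)[1]))
```

Bumping the padding width only pushes the overflow point out; the numeric-suffix sort fixes the comparison for any partition count.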