[ https://issues.apache.org/jira/browse/SPARK-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177617#comment-14177617 ]
Josh Rosen commented on SPARK-4019:
-----------------------------------

This issue is caused by a bug in HighlyCompressedMapStatus. I think that we're compressing a bunch of blocks whose average size is small, so the average gets rounded down to zero. As a result, we skip those blocks as empty even though they contain data. I'm going to work on a fix ASAP, but first I'm going to use ScalaCheck to write a property-based test that would have caught this. The invariant that we need to maintain: "if an uncompressed map output size is greater than zero, then compressing and decompressing should continue to report the map output as non-empty." (A sketch of such a property test follows the quoted issue below.)

> Repartitioning with more than 2000 partitions drops all data
> -------------------------------------------------------------
>
>                 Key: SPARK-4019
>                 URL: https://issues.apache.org/jira/browse/SPARK-4019
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Xiangrui Meng
>            Assignee: Josh Rosen
>            Priority: Blocker
>
> {code}
> sc.makeRDD(0 until 10, 1000).repartition(2001).collect()
> {code}
> returns `Array()`.
> 1.1.0 doesn't have this issue. Tried both the HASH and SORT shuffle managers.
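A minimal sketch of the property-based test described in the comment, assuming hypothetical compressSize/decompressSize stand-ins that mirror Spark's single-byte, log-1.1-scale size encoding (the real helpers live on org.apache.spark.scheduler.MapStatus, and the actual fix belongs in HighlyCompressedMapStatus):

{code}
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll

object MapOutputSizeSpec extends Properties("MapOutputSize") {

  // Hypothetical stand-ins for Spark's size codec: sizes are encoded as a
  // single byte on a log-1.1 scale, so large sizes are stored approximately.
  def compressSize(size: Long): Byte =
    if (size == 0) 0
    else if (size <= 1L) 1
    else math.min(255, math.ceil(math.log(size.toDouble) / math.log(1.1)).toInt).toByte

  def decompressSize(compressed: Byte): Long =
    if (compressed == 0) 0
    else math.pow(1.1, compressed & 0xFF).toLong

  // The invariant from the comment: a map output whose uncompressed size is
  // greater than zero must still report a non-zero size after a
  // compress/decompress round trip.
  property("non-empty sizes stay non-empty") =
    forAll(Gen.choose(1L, Long.MaxValue)) { size =>
      decompressSize(compressSize(size)) > 0
    }
}
{code}

ScalaCheck will exercise the round trip across randomly generated sizes, which is exactly the kind of check that would have caught a small-average-rounds-to-zero regression.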