aokolnychyi commented on code in PR #8972:
URL: https://github.com/apache/iceberg/pull/8972#discussion_r1380712826
##########
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteManifestsAction.java:
##########
@@ -406,12 +406,13 @@ public void testRewriteLargeManifestsPartitionedTable()
throws IOException {
// all records belong to the same partition
List<ThreeColumnRecord> records = Lists.newArrayList();
- for (int i = 0; i < 50; i++) {
+ // use enough records to split metadata as the rolling writer checks length once per 250 entries
+ for (int i = 0; i < 1500; i++) {
records.add(new ThreeColumnRecord(i, String.valueOf(i), "0"));
}
Dataset<Row> df = spark.createDataFrame(records, ThreeColumnRecord.class);
// repartition to create separate files
- writeDF(df.repartition(50, df.col("c1")));
+ writeDF(df.repartition(1500, df.col("c1")));
Review Comment:
Yep, let me check. It may actually be too slow, but I need a way to reliably
get past the length check in the manifest writer. Maybe I can generate a fake
manifest instead.
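
For illustration only, here is a minimal sketch of the "fake manifest" idea: build a single manifest with many synthetic entries through the core manifest writer instead of producing 1500 real data files via Spark. The helper name, the output path, and the `c3=0` partition path are assumptions for this sketch, not part of this PR.

```java
import java.io.IOException;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.DataFiles;
import org.apache.iceberg.ManifestFile;
import org.apache.iceberg.ManifestFiles;
import org.apache.iceberg.ManifestWriter;
import org.apache.iceberg.Table;
import org.apache.iceberg.io.OutputFile;

// Hypothetical helper for the test class: writes one manifest with many
// synthetic data file entries so the rewrite action has a large manifest
// to split, without writing real data files through Spark.
private ManifestFile writeFakeManifest(Table table, int numEntries) throws IOException {
  // assumed location under the table's metadata dir; any writable path works
  OutputFile outputFile =
      table.io().newOutputFile(table.location() + "/metadata/fake-manifest.avro");

  ManifestWriter<DataFile> writer = ManifestFiles.write(table.spec(), outputFile);
  try {
    for (int i = 0; i < numEntries; i++) {
      // synthetic entry; "c3=0" assumes the test's identity partitioning on c3
      writer.add(
          DataFiles.builder(table.spec())
              .withPath(String.format("/path/to/data-%d.parquet", i))
              .withFileSizeInBytes(10)
              .withPartitionPath("c3=0")
              .withRecordCount(1)
              .build());
    }
  } finally {
    writer.close();
  }
  return writer.toManifestFile();
}
```

If this direction works out, the returned manifest could presumably be attached to the table with something like `table.newFastAppend().appendManifest(manifest).commit()` before running the rewrite action, but whether that is fast and reliable enough is exactly what needs to be checked.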