pvary commented on code in PR #12520:
URL: https://github.com/apache/iceberg/pull/12520#discussion_r1993976998
##########
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java:
##########
@@ -84,7 +86,12 @@ protected void writeAndValidate(Schema writeSchema, Schema expectedSchema) throw
File location = new File(parent, "test");
HadoopTables tables = new HadoopTables(CONF);
- Table table = tables.create(writeSchema, PartitionSpec.unpartitioned(), location.toString());
+ Table table =
+ tables.create(
+ writeSchema,
+ PartitionSpec.unpartitioned(),
+ ImmutableMap.of(TableProperties.FORMAT_VERSION, "3"),
+ location.toString());
Review Comment:
This should be checked.
There are some tests, like TestParquetVectorizedReads (and 12 others), whose schemas use V3-specific features (withInitialDefault) during table creation. Those tests must create their tables with format version 3.
We have a few options here:
1. Move every test to V3
2. Run every test on both V2 and V3, and disable the V2 runs for tests that use V3-specific schema features
3. Keep every test on V2 and move only the tests that use V3-specific schema features to V3
I opted for option 1 to test it out and see whether there are other failures, but we might reconsider.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]