wjones127 commented on code in PR #13088:
URL: https://github.com/apache/arrow/pull/13088#discussion_r868114504
##########
cpp/src/arrow/dataset/dataset_test.cc:
##########
@@ -107,6 +111,38 @@ TEST_F(TestInMemoryDataset, InMemoryFragment) {
   AssertSchemaEqual(batch->schema(), schema);
 }
+TEST_F(TestInMemoryDataset, HandlesDifferingSchemas) {
+  constexpr int64_t kBatchSize = 1024;
+
+  // These schemas can be merged
+  SetSchema({field("i32", int32()), field("f64", float64())});
+  auto batch1 = ConstantArrayGenerator::Zeroes(kBatchSize, schema_);
+  SetSchema({field("i32", int64())});
+  auto batch2 = ConstantArrayGenerator::Zeroes(kBatchSize, schema_);
+  RecordBatchVector batches{batch1, batch2};
+
+  auto dataset = std::make_shared<InMemoryDataset>(schema_, batches);
Review Comment:
In file fragments, it's totally normal for the physical schema to differ from the dataset schema.
This came up when I realized we could create a union dataset out of
filesystem datasets, but not in-memory ones, if their schemas differed.
> The other way (arguably) would be to have ReplaceSchema project the
> batches (though that is a lot more work).
I thought about that, but wouldn't we then be materializing the projected
batches before any scan is started? It seems more efficient for the
projection to happen as part of the scan.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]