alamb commented on code in PR #7694:
URL: https://github.com/apache/arrow-datafusion/pull/7694#discussion_r1341620743


##########
datafusion/physical-plan/src/values.rs:
##########
@@ -54,7 +55,16 @@ impl ValuesExec {
         let n_col = schema.fields().len();
         // we have this single row, null, typed batch as a placeholder to satisfy evaluation argument
         let batch = RecordBatch::try_new(
-            schema.clone(),
+            // the schema we're using might have non-nullable fields, so we need to make them nullable
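
A minimal sketch (not the PR's actual code) of the kind of schema adjustment the added comment describes, assuming arrow's `Schema`/`Field` API; the helper name is hypothetical:

```rust
use std::sync::Arc;

use arrow::datatypes::{Field, Schema, SchemaRef};

// Hypothetical helper: build a copy of `schema` in which every field is
// marked nullable, so that a placeholder batch of all-null columns passes
// RecordBatch validation even when the original fields are non-nullable.
fn nullable_schema(schema: &SchemaRef) -> SchemaRef {
    let fields: Vec<Field> = schema
        .fields()
        .iter()
        .map(|f| f.as_ref().clone().with_nullable(true))
        .collect();
    Arc::new(Schema::new(fields))
}
```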

Review Comment:
   Given that the issue appears to be that the schema in the MemoryTable doesn't match the schema of all of its batches, what would you think about making this change in the MemoryTable insert logic itself?
   
   For example, perhaps after an insert it could rewrite all of the batches to use its own schema (perhaps also ensuring the nullability is correct).
   
   https://github.com/apache/arrow-datafusion/blob/2ffda2a9a893455e55cd773d9dd4f426a61d8cd3/datafusion/core/src/datasource/memory.rs#L262
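
A rough sketch of what that suggestion could look like; the helper name is hypothetical, and it assumes the batch's column data types already match the table schema, so only the schema (and its nullability metadata) needs to be swapped:

```rust
use arrow::datatypes::SchemaRef;
use arrow::error::ArrowError;
use arrow::record_batch::RecordBatch;

// Hypothetical helper: re-create `batch` against the table's own schema so
// every stored batch carries identical field metadata. `RecordBatch::try_new`
// re-validates the columns against the supplied schema, so null data in a
// field declared non-nullable is rejected at insert time rather than later
// during execution.
fn align_to_table_schema(
    batch: &RecordBatch,
    table_schema: SchemaRef,
) -> Result<RecordBatch, ArrowError> {
    RecordBatch::try_new(table_schema, batch.columns().to_vec())
}
```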


