lwz9103 commented on code in PR #8256:
URL: https://github.com/apache/incubator-gluten/pull/8256#discussion_r1895340232


##########
cpp-ch/local-engine/Parser/ExpressionParser.cpp:
##########
@@ -433,8 +432,31 @@ const ActionsDAG::Node * ExpressionParser::parseExpression(ActionsDAG & actions_
                         "SingularOrList options type mismatch:{} and {}",
                         elem_type->getName(),
                         option_type->getName());
+                options_type_and_field.emplace_back(type_and_field);
+            }
 
-                elem_column->insert(type_and_field.second);
+            // check tuple internal types
+            if (isTuple(elem_type) && isTuple(args[0]->result_type))
+            {
+                // align tuple inner types with nullable
+                auto tuple_type = std::static_pointer_cast<const DB::DataTypeTuple>(elem_type);
+                auto result_type = std::static_pointer_cast<const DB::DataTypeTuple>(args[0]->result_type);
+                assert(tuple_type->getElements().size() == result_type->getElements().size());
+                DataTypes new_types;
+                for (int i = 0; i < tuple_type->getElements().size(); ++i)

Review Comment:
   We can use the first tuple's type directly as the element type, which avoids the nullable mismatch:
   1. Spark guarantees that the types of all tuples in the `in` filter are structurally consistent; see `org.apache.spark.sql.types.DataType#equalsStructurally`.
   2. Additionally, the mapping from Spark types to ClickHouse types is one-to-one; see `TypeParser.cpp`.
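   The suggested simplification can be sketched with a toy model. This is a minimal, self-contained illustration only: `ToyType`, `equalsStructurally`, and `commonElementType` are hypothetical stand-ins for `DB::IDataType`/`DB::DataTypeTuple`, not the actual ClickHouse API. It shows the idea of reusing the first option's type instead of aligning nullability field by field:

   ```cpp
   #include <cassert>
   #include <cstddef>
   #include <string>
   #include <vector>

   // Toy stand-in for a data type: a name, a nullability flag, and
   // (for tuples) a list of element types.
   struct ToyType
   {
       std::string name;              // e.g. "UInt64" or "Nullable(UInt64)"
       bool nullable = false;
       std::vector<ToyType> elements; // non-empty => tuple

       // Compare shape only, ignoring nullability -- mirrors the guarantee
       // from Spark's DataType#equalsStructurally cited above.
       bool equalsStructurally(const ToyType & other) const
       {
           if (elements.size() != other.elements.size())
               return false;
           for (size_t i = 0; i < elements.size(); ++i)
               if (!elements[i].equalsStructurally(other.elements[i]))
                   return false;
           return true;
       }
   };

   // Suggested approach: take the first option's type as the common element
   // type, rather than rebuilding the tuple type with per-field alignment.
   ToyType commonElementType(const std::vector<ToyType> & option_types)
   {
       assert(!option_types.empty());
       for (const auto & t : option_types)
           assert(t.equalsStructurally(option_types.front()));
       return option_types.front();
   }

   int main()
   {
       ToyType t1{"Tuple(UInt64, String)", false,
                  {{"UInt64", false, {}}, {"String", false, {}}}};
       ToyType t2{"Tuple(Nullable(UInt64), String)", false,
                  {{"Nullable(UInt64)", true, {}}, {"String", false, {}}}};

       // The two options differ only in nullability; the first type wins.
       ToyType common = commonElementType({t1, t2});
       assert(common.name == t1.name);
       return 0;
   }
   ```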
                  



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
