dongjoon-hyun commented on a change in pull request #24046: [MINOR][SQL] Throw 
better exception for Encoder with tuple more than 22 elements
URL: https://github.com/apache/spark/pull/24046#discussion_r264080476
 
 

 ##########
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
 ##########
 @@ -80,7 +80,11 @@ object ExpressionEncoder {
    * name/positional binding is preserved.
    */
   def tuple(encoders: Seq[ExpressionEncoder[_]]): ExpressionEncoder[_] = {
-    // TODO: check if encoders length is more than 22 and throw exception for it.
+    if (encoders.length > 22) {
 
 Review comment:
   @maropu . There is `Dataset.selectUntyped` using 
`ExpressionEncoder.tuple(encoders)`. That is also an internal helper function 
which is used with at most 5 elements, but some users can extend `Dataset` and 
use it with more parameters. BTW, it's true that Apache Spark doesn't guarantee 
anything about that.
   
   @HeartSaVioR . I prefer the current one, but @maropu 's suggestion also looks 
good.
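
   As an illustration of the guard being discussed (a hedged sketch only; the helper name and message here are hypothetical, not the actual patch): Scala's `TupleN` classes stop at `Tuple22`, so an encoder for more elements cannot be constructed and should fail fast with a clear error rather than a confusing reflection failure.

```scala
object TupleEncoderCheck {
  // Hypothetical guard mirroring the `encoders.length > 22` check in the
  // diff above: reject early, since Scala has no Tuple23 and beyond.
  def requireAtMost22(encoderCount: Int): Unit = {
    if (encoderCount > 22) {
      throw new UnsupportedOperationException(
        s"Due to Scala's limited support of tuple, tuple with more than " +
          s"22 elements ($encoderCount given) are not supported.")
    }
  }

  def main(args: Array[String]): Unit = {
    requireAtMost22(5) // fine: within the Tuple22 limit

    // 23 encoders should be rejected with a descriptive exception.
    val rejected =
      try { requireAtMost22(23); false }
      catch { case _: UnsupportedOperationException => true }
    assert(rejected, "expected an exception for 23 encoders")
    println("ok")
  }
}
```

   The usage in `main` shows both the accepted case (5 elements, as in `Dataset.selectUntyped`) and the rejected case.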

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]