maropu commented on a change in pull request #29421:
URL: https://github.com/apache/spark/pull/29421#discussion_r469889057
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveScriptTransformationExec.scala
##########
@@ -274,6 +275,14 @@ object HiveScriptIOSchema extends HiveInspectors {
     var propsMap = serdeProps.toMap + (serdeConstants.LIST_COLUMNS -> columns.mkString(","))
     propsMap = propsMap + (serdeConstants.LIST_COLUMN_TYPES -> columnTypesNames)
+    if (!propsMap.contains(serdeConstants.SERIALIZATION_LAST_COLUMN_TAKES_REST)) {
Review comment:
hm, if we implement a Spark-native `LazySimpleSerde`-like serde in follow-up work, will the answers below change accordingly?
https://github.com/apache/spark/pull/29414/files#diff-01228f8ade90c259db2dfdf31ea8a5d1R60-R71
Either way, we need to update the comment there, and we should fix the `java.lang.ArrayIndexOutOfBoundsException` somewhere, e.g. for the schema-less case below:
```
-- SPARK-32388 handle schema less
SELECT TRANSFORM(a)
USING 'cat'
FROM t;
```
https://github.com/apache/spark/pull/29414/files#diff-4c9e72a08bc55c74dfcf8b000bfa8bd8R103-R111
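For reference, a minimal sketch of the guard being discussed, assuming Hive's `serdeConstants` keys and assuming that defaulting `serialization.last.column.takes.rest` to `"true"` mimics Hive's schema-less `TRANSFORM` behavior; `schemaLessSerdeProps` is a hypothetical helper, not the code in this PR:
```scala
import org.apache.hadoop.hive.serde.serdeConstants

// Hypothetical helper sketching the guard above: only apply the
// "last column takes rest" default when the user has not set it explicitly.
def schemaLessSerdeProps(
    serdeProps: Map[String, String],
    columns: Seq[String],
    columnTypesNames: String): Map[String, String] = {
  var propsMap = serdeProps +
    (serdeConstants.LIST_COLUMNS -> columns.mkString(",")) +
    (serdeConstants.LIST_COLUMN_TYPES -> columnTypesNames)
  if (!propsMap.contains(serdeConstants.SERIALIZATION_LAST_COLUMN_TAKES_REST)) {
    // Assumption: "true" reproduces Hive's schema-less TRANSFORM output
    // (key = first field, value = the remaining fields), which should also
    // avoid the ArrayIndexOutOfBoundsException for single-column input.
    propsMap += (serdeConstants.SERIALIZATION_LAST_COLUMN_TAKES_REST -> "true")
  }
  propsMap
}
```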