Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22745#discussion_r227037566
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala ---
@@ -278,24 +278,20 @@ object JavaTypeInference {
       case _ if mapType.isAssignableFrom(typeToken) =>
         val (keyType, valueType) = mapKeyValueType(typeToken)
-        val keyDataType = inferDataType(keyType)._1
-        val valueDataType = inferDataType(valueType)._1
         val keyData =
           Invoke(
-            MapObjects(
+            UnresolvedMapObjects(
               p => deserializerFor(keyType, Some(p)),
-              Invoke(getPath, "keyArray", ArrayType(keyDataType)),
-              keyDataType),
+              UnresolvedGetArrayFromMap(getPath, GetArrayFromMap.Key())),
--- End diff ---
Yea, we can write `eval` and `doGenCode` from scratch. It's also more efficient,
since we can omit the useless try-catch in `Invoke`.
E.g.:
```
// from UnaryExpression: nullSafeEval is only called with non-null input
override def nullSafeEval(input: Any): Any = input.asInstanceOf[MapData].keyArray()

override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode =
  defineCodeGen(ctx, ev, c => s"$c.keyArray()")
```
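For a fuller picture, here is a self-contained sketch of what such an expression could
look like, essentially the same shape as the existing `MapKeys` collection expression.
The class name `GetKeyArrayFromMap` and the exact type handling are illustrative
assumptions, not what this PR actually adds:
```
import org.apache.spark.sql.catalyst.expressions.{Expression, UnaryExpression}
import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, ExprCode}
import org.apache.spark.sql.catalyst.util.MapData
import org.apache.spark.sql.types.{ArrayType, DataType, MapType}

// Hypothetical expression that extracts the key array from a map value.
// eval and codegen are written from scratch, so no try-catch block is
// generated (unlike going through Invoke).
case class GetKeyArrayFromMap(child: Expression) extends UnaryExpression {

  // Map keys are never null, so the result array never contains null.
  override def dataType: DataType =
    ArrayType(child.dataType.asInstanceOf[MapType].keyType, containsNull = false)

  override def nullSafeEval(input: Any): Any =
    input.asInstanceOf[MapData].keyArray()

  override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode =
    defineCodeGen(ctx, ev, c => s"$c.keyArray()")
}
```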
---