Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/20756#discussion_r172801767

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala ---

```
@@ -1261,8 +1261,24 @@ case class InitializeJavaBean(beanInstance: Expression, setters: Map[String, Exp

   override def children: Seq[Expression] = beanInstance +: setters.values.toSeq
   override def dataType: DataType = beanInstance.dataType

-  override def eval(input: InternalRow): Any =
-    throw new UnsupportedOperationException("Only code-generated evaluation is supported.")
+  override def eval(input: InternalRow): Any = {
+    val instance = beanInstance.eval(input).asInstanceOf[Object]
+    if (instance != null) {
+      setters.foreach { case (setterMethod, fieldExpr) =>
```

--- End diff --

Why are we resolving the setters during `eval`? That seems a bit expensive. How about we create the setters once, before `eval` executes? For example (I got a bit carried away):

```scala
import java.lang.invoke.{MethodHandles, MethodType}

private lazy val resolvedSetters = {
  val ObjectType(beanClass) = beanInstance.dataType
  val lookup = MethodHandles.lookup()
  setters.map { case (name, expr) =>
    // Resolve the expression's type (this mapping should be better!)
    val fieldClass = CallMethodViaReflection.typeMapping(expr.dataType).head
    val handle = lookup.findVirtual(
      beanClass,
      name,
      MethodType.methodType(classOf[Unit], fieldClass))
    handle -> expr
  }
}

override def eval(input: InternalRow): Any = {
  val bean = beanInstance.eval(input)
  resolvedSetters.foreach { case (setter, expr) =>
    setter.invoke(bean, expr.eval(input))
  }
  bean
}
```
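To illustrate the pattern the suggestion above relies on, here is a minimal, self-contained Java sketch (using a hypothetical `Person` bean, not Spark code): resolve the setter's `MethodHandle` once up front, then invoke the pre-resolved handle per row, which is cheap compared to re-resolving it on every call.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BeanSetterDemo {
    // Hypothetical bean standing in for a code-generated Java bean.
    public static class Person {
        private String name;
        public void setName(String n) { this.name = n; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // The expensive step: resolve the virtual setter once.
        // void.class plays the role of Scala's classOf[Unit] above.
        MethodHandle setName = lookup.findVirtual(
            Person.class, "setName",
            MethodType.methodType(void.class, String.class));

        Person p = new Person();
        // The cheap, per-row step: invoke the pre-resolved handle.
        setName.invoke(p, "Ada");
        System.out.println(p.getName()); // prints "Ada"
    }
}
```

The same two-phase split (resolve in a `lazy val`, invoke in `eval`) is what the Scala sketch above does with `resolvedSetters`.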