Github user cloud-fan commented on the issue: https://github.com/apache/spark/pull/16479

Everything in the package `org.apache.spark.sql.execution` should be internal to Spark SQL. Technically you can still implement `OutputWriter` outside of Spark, but there is no guarantee about its stability. Ideally we should not change any interface unnecessarily, but this change is reasonable: as an internal interface, it is more efficient to use `InternalRow` directly instead of converting `InternalRow` to `Row` and then operating on `Row`.

I'm sorry that this breaks spark-avro, but we can make spark-avro more efficient by switching to the new interface. Alternatively, we can copy the previous conversion code into spark-avro, so that it can still convert `InternalRow` to `Row` and operate on `Row`, as sketched below.
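For illustration, here is a minimal sketch (not the actual spark-avro code) of the "copy the previous conversion" option: keep the existing Row-based writing logic and convert each `InternalRow` on the way in, using `CatalystTypeConverters` as Spark itself did before this change. `CatalystTypeConverters` is an internal API, so the same stability caveat applies; the `writeRow` helper below is purely hypothetical.

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.{CatalystTypeConverters, InternalRow}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.unsafe.types.UTF8String

object InternalRowConversionSketch {
  def main(args: Array[String]): Unit = {
    val dataSchema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType)))

    // For a StructType, createToScalaConverter yields an InternalRow => Row
    // conversion, mirroring the conversion Spark performed before this PR.
    val toExternalRow = CatalystTypeConverters.createToScalaConverter(dataSchema)
      .asInstanceOf[InternalRow => Row]

    // Catalyst's internal representation uses UTF8String for strings.
    val internal = InternalRow(1, UTF8String.fromString("avro"))

    // Hypothetical stand-in for the existing Row-based writer logic,
    // which can stay unchanged once the conversion is applied up front.
    def writeRow(row: Row): Unit = println(s"writing $row")

    writeRow(toExternalRow(internal))  // prints: writing [1,avro]
  }
}
```

Switching to the new interface avoids this per-record conversion entirely, which is where the efficiency gain comes from.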