yihua commented on a change in pull request #4789:
URL: https://github.com/apache/hudi/pull/4789#discussion_r814429501
##########
File path: hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/AvroConversionUtils.scala
##########
@@ -41,8 +107,8 @@ object AvroConversionUtils {
else {
val schema = new Schema.Parser().parse(schemaStr)
val dataType = convertAvroSchemaToStructType(schema)
- val convertor = AvroConversionHelper.createConverterToRow(schema, dataType)
- records.map { x => convertor(x).asInstanceOf[Row] }
+ val converter = createConverterToRow(schema, dataType)
Review comment:
Sounds good. Based on the info provided, it looks like we can depend on `InternalRow` for now, for performance reasons. For the Row writer path, it would be good if we can leverage Spark's internal optimization with `InternalRow`, without using `InternalRow` directly in Hudi code.
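The performance point in the diff can be illustrated with a minimal, dependency-free sketch of the pattern being discussed: build the schema-driven converter once, outside the per-record `map`, so the per-record work is only applying the already-constructed function. This is an assumption-laden illustration, not Hudi's actual `createConverterToRow` (which takes an Avro `Schema` and a Spark `StructType`); the `Record`/`Row` type aliases below are stand-ins for `GenericRecord` and Spark's `Row`.

```scala
// Minimal sketch (no Spark/Avro dependencies) of the converter-caching
// pattern from the diff above. All names here are illustrative stand-ins,
// not Hudi or Spark APIs.
object ConverterSketch {
  // Stand-ins for Avro GenericRecord and Spark Row, for illustration only.
  type Record = Map[String, Any]
  type Row    = Seq[Any]

  // Hypothetical analogue of createConverterToRow: derive the field order
  // from the "schema" once, so each record only pays for field lookups.
  def createConverterToRow(fieldNames: Seq[String]): Record => Row =
    record => fieldNames.map(record(_))

  def main(args: Array[String]): Unit = {
    val fields    = Seq("id", "name")
    val converter = createConverterToRow(fields)   // built once, reused
    val records   = Seq(
      Map("id" -> 1, "name" -> "a"),
      Map("id" -> 2, "name" -> "b")
    )
    val rows = records.map(converter)              // cheap per-record apply
    println(rows)
  }
}
```

The same structure is why the diff hoists `createConverterToRow(schema, dataType)` out of the `records.map { ... }` closure: schema interpretation happens once, not per record.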
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]