the-other-tim-brown commented on code in PR #13987:
URL: https://github.com/apache/hudi/pull/13987#discussion_r2389677303
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/common/table/read/TestHoodieFileGroupReaderOnSpark.scala:
##########
@@ -116,8 +118,11 @@ class TestHoodieFileGroupReaderOnSpark extends TestHoodieFileGroupReaderBase[Int
                          options: util.Map[String, String],
                          schemaStr: String): Unit = {
     val schema = new Schema.Parser().parse(schemaStr)
-    val genericRecords = spark.sparkContext.parallelize(recordList.asScala.map(_.toIndexedRecord(schema, CollectionUtils.emptyProps))
-      .filter(r => r.isPresent).map(r => r.get.getData.asInstanceOf[GenericRecord]).toSeq, 2)
+    val genericRecords : RDD[GenericRecord] = spark.sparkContext.parallelize(recordList.asScala.map(_.toIndexedRecord(schema, CollectionUtils.emptyProps))
Review Comment:
There are a few spots in the test code that perform a similar conversion; should we standardize on using the SparkDatasetMixin?
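
If the conversion were factored out as the comment suggests, it might look something like the sketch below. This is a hypothetical helper trait, not the actual `SparkDatasetMixin` API; the trait and method names (`GenericRecordRddSupport`, `toGenericRecordRdd`) are illustrative, while the body mirrors the chain shown in the diff (`toIndexedRecord`, filter on `isPresent`, cast to `GenericRecord`):

```scala
import scala.collection.JavaConverters._

import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

import org.apache.hudi.common.model.HoodieRecord
import org.apache.hudi.common.util.CollectionUtils

// Hypothetical shared test mixin centralizing the HoodieRecord -> RDD[GenericRecord]
// conversion, so individual tests do not repeat the parallelize/filter/map chain.
trait GenericRecordRddSupport {

  // Converts a java.util.List of HoodieRecords into an RDD[GenericRecord],
  // dropping records whose indexed-record representation is absent.
  def toGenericRecordRdd(sc: SparkContext,
                         recordList: java.util.List[HoodieRecord[_]],
                         schemaStr: String,
                         parallelism: Int = 2): RDD[GenericRecord] = {
    val schema = new Schema.Parser().parse(schemaStr)
    sc.parallelize(
      recordList.asScala
        .map(_.toIndexedRecord(schema, CollectionUtils.emptyProps))
        .filter(_.isPresent)
        .map(_.get.getData.asInstanceOf[GenericRecord])
        .toSeq,
      parallelism)
  }
}
```

A test class would then mix in the trait and call `toGenericRecordRdd(spark.sparkContext, recordList, schemaStr)` in place of the inline block, keeping the explicit `RDD[GenericRecord]` return type the diff introduces.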
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]