Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/11960#discussion_r57599706
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JSONRelation.scala ---
@@ -120,6 +122,39 @@ class DefaultSource extends FileFormat with DataSourceRegister {
     }
   }
+  override def buildReader(
+      sqlContext: SQLContext,
+      partitionSchema: StructType,
+      dataSchema: StructType,
+      filters: Seq[Filter],
+      options: Map[String, String]): PartitionedFile => Iterator[InternalRow] = {
+    val conf = new Configuration(sqlContext.sparkContext.hadoopConfiguration)
+    val broadcastedConf =
+      sqlContext.sparkContext.broadcast(new SerializableConfiguration(conf))
+
+    val parsedOptions: JSONOptions = new JSONOptions(options)
+    val columnNameOfCorruptRecord = parsedOptions.columnNameOfCorruptRecord
+      .getOrElse(sqlContext.conf.columnNameOfCorruptRecord)
+
+    val fullSchema = dataSchema.toAttributes ++ partitionSchema.toAttributes
+    val joinedRow = new JoinedRow()
+
+    file => {
+      val lines = new HadoopFileLinesReader(file, broadcastedConf.value.value).map(_.toString)
+
+      val rows = JacksonParser.parseJson(
+        lines,
+        dataSchema,
--- End diff --
@yhuai @cloud-fan Is it OK if the schema passed to `parseJson` is different from, but compatible with, the schema of the underlying input data files? I assume it is? Otherwise we might need the `physicalSchema` argument added in #12002.
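
For readers following along, here is a minimal, self-contained sketch of the situation the question describes, written against the public `DataFrameReader.json(Dataset[String])` API (available in later Spark versions) rather than the internal `JacksonParser` used in the diff. The object name, the sample records, and `requestedSchema` are illustrative assumptions, not code from this PR: the schema handed to the reader omits a field that the data actually contains, i.e. it is different from, but compatible with, the files.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// Hypothetical example, not part of Spark or of this PR.
object CompatibleSchemaExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("compatible-schema-sketch")
      .getOrCreate()
    import spark.implicits._

    // Stand-in for the underlying JSON files: every record carries an "extra"
    // field that the requested schema does not mention.
    val jsonLines = Seq(
      """{"id": 1, "name": "a", "extra": "dropped"}""",
      """{"id": 2, "name": "b", "extra": "dropped"}""").toDS()

    // A schema that is different from the files' full schema but compatible
    // with it: a subset of the fields, with matching types.
    val requestedSchema = StructType(Seq(
      StructField("id", LongType),
      StructField("name", StringType)))

    // The reader is expected to return only the requested columns and to
    // ignore the unrequested "extra" field.
    spark.read.schema(requestedSchema).json(jsonLines).show()

    spark.stop()
  }
}

If the answer to the question is yes, a pruned or otherwise compatible `dataSchema` passed to `parseJson` above should behave like the user-supplied schema in this sketch.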