Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17255#discussion_r105835549
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonInferSchema.scala ---
    @@ -40,18 +40,11 @@ private[sql] object JsonInferSchema {
           json: RDD[T],
           configOptions: JSONOptions,
           createParser: (JsonFactory, T) => JsonParser): StructType = {
     -    require(configOptions.samplingRatio > 0,
     -      s"samplingRatio (${configOptions.samplingRatio}) should be greater than 0")
         val shouldHandleCorruptRecord = configOptions.permissive
         val columnNameOfCorruptRecord = configOptions.columnNameOfCorruptRecord
    -    val schemaData = if (configOptions.samplingRatio > 0.99) {
    -      json
    -    } else {
    -      json.sample(withReplacement = false, configOptions.samplingRatio, 1)
    -    }
    --- End diff ---
    
    Because `JsonInferSchema.infer` takes an `RDD[T]`, which is the actual source from which the JSON strings are parsed. In the whole-file case it is an `RDD[PortableDataStream]`, whereas for the normal one it is an `RDD[UTF8String]`.
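
    For reference, the two instantiations differ only in how the Jackson parser is created from each record type. A rough sketch (these lambdas are illustrative, not the exact helpers in the PR):

    ```scala
    import com.fasterxml.jackson.core.{JsonFactory, JsonParser}
    import org.apache.spark.input.PortableDataStream
    import org.apache.spark.unsafe.types.UTF8String

    // Normal (line-based) source: parse the record's bytes directly.
    val fromUTF8String: (JsonFactory, UTF8String) => JsonParser =
      (factory, record) => factory.createParser(record.getBytes)

    // Whole-file source: open the file's stream and let Jackson read from it.
    val fromPortableDataStream: (JsonFactory, PortableDataStream) => JsonParser =
      (factory, record) => factory.createParser(record.open())
    ```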
    
    The thing is, there seems to be an advantage in doing the sample operation on the `Dataset[String]` (not on the `RDD`). So, the sampling has to be applied to the `Dataset[String]` before it is converted into an `RDD[UTF8String]`.
    
    In a simple view:
    
    - `TextInputJsonDataSource`:
    
      ```scala
      val json: Dataset[String] = ...
      val sampled: Dataset[String] = JsonUtils.sample(...)
      val rdd: RDD[UTF8String] = ... // converted from `sampled`
      JsonInferSchema.infer(rdd)
      ```
    
    - `WholeFileJsonDataSource`:
    
      ```scala
      val json: RDD[PortableDataStream] = ...
      val sampled: RDD[PortableDataStream] = JsonUtils.sample(...)
      JsonInferSchema.infer(sampled)
      ```
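
    For instance, `JsonUtils.sample` could be a pair of overloads along these lines (a minimal sketch reusing the logic removed in this diff; the actual signatures in the PR may differ):

    ```scala
    import org.apache.spark.input.PortableDataStream
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Dataset
    import org.apache.spark.sql.catalyst.json.JSONOptions

    object JsonUtils {
      // Text-based source: sample while it is still a Dataset[String] so the
      // sampling runs through the Dataset execution path.
      def sample(json: Dataset[String], options: JSONOptions): Dataset[String] = {
        require(options.samplingRatio > 0,
          s"samplingRatio (${options.samplingRatio}) should be greater than 0")
        if (options.samplingRatio > 0.99) {
          json
        } else {
          json.sample(withReplacement = false, options.samplingRatio, 1)
        }
      }

      // Whole-file source: only ever exists as an RDD, so sample it there.
      def sample(json: RDD[PortableDataStream], options: JSONOptions): RDD[PortableDataStream] = {
        require(options.samplingRatio > 0,
          s"samplingRatio (${options.samplingRatio}) should be greater than 0")
        if (options.samplingRatio > 0.99) {
          json
        } else {
          json.sample(withReplacement = false, options.samplingRatio, 1)
        }
      }
    }
    ```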
    
    I could not find a good way to generalize `JsonInferSchema.infer` to take both a `Dataset` and an `RDD` as the source in a way that keeps the logic in here with small and clean changes.
    
    If this question is about why it uses `Dataset.sample` instead of `RDD.sample`, that approach was suggested in https://github.com/apache/spark/pull/17255#issuecomment-285960658.
    
    To my knowledge, both use the same sampler, `BernoulliCellSampler`, since sampling with replacement is disabled, but the `Dataset` variant goes through code generation. So, I thought there might be a bit of a benefit.
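
    As a toy illustration of that difference (assuming a local `SparkSession` named `spark`; the setup here is hypothetical):

    ```scala
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("sample-demo").getOrCreate()
    import spark.implicits._

    val ds = spark.range(0, 100000).map(_.toString)

    // Dataset.sample: planned as a Sample operator, so it can benefit from
    // whole-stage code generation.
    val dsSampled = ds.sample(withReplacement = false, 0.1, 1L)

    // RDD.sample: the same kind of Bernoulli sampling, but executed as a
    // plain sampled RDD without code generation.
    val rddSampled = ds.rdd.sample(withReplacement = false, 0.1, 1L)

    println(s"Dataset sample count: ${dsSampled.count()}")
    println(s"RDD sample count: ${rddSampled.count()}")
    ```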
    


