Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17719#discussion_r112804550
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
    @@ -68,6 +68,18 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
       }
     
       /**
    +   * Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can
    +   * infer the input schema automatically from data. By specifying the schema here, the underlying
    +   * data source can skip the schema inference step, and thus speed up data loading.
    +   *
    +   * @since 2.3.0
    +   */
    +  def schema(schemaString: String): DataFrameReader = {
    +    this.userSpecifiedSchema = Option(StructType.fromDDL(schemaString))
    --- End diff --
    
    This change will make the PySpark API inconsistent with the Scala API.
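
    For context, a sketch of how the proposed overload would sit next to the existing `StructType`-based one. This is illustrative only: it assumes a local `SparkSession` and a hypothetical input file `people.json`; the `schema(String)` signature and `StructType.fromDDL` come from the diff above.

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val spark = SparkSession.builder()
      .appName("schema-ddl-sketch")
      .master("local[*]")   // assumption: local run for illustration
      .getOrCreate()

    // Existing overload: build the schema programmatically.
    val structSchema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)))
    val df1 = spark.read.schema(structSchema).json("people.json")

    // Proposed overload (per this PR, @since 2.3.0): pass a DDL-formatted
    // string; DataFrameReader parses it via StructType.fromDDL internally.
    val df2 = spark.read.schema("name STRING, age INT").json("people.json")
    ```

    The point of the comment stands: unless PySpark's `DataFrameReader.schema` grows an equivalent string-accepting path, only the Scala side would support the DDL-string form.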

