Github user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17719#discussion_r112805934
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
    @@ -68,6 +68,18 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
       }
     
       /**
    +   * Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can
    +   * infer the input schema automatically from data. By specifying the schema here, the underlying
    +   * data source can skip the schema inference step, and thus speed up data loading.
    +   *
    +   * @since 2.3.0
    +   */
    +  def schema(schemaString: String): DataFrameReader = {
    +    this.userSpecifiedSchema = Option(StructType.fromDDL(schemaString))
    --- End diff --
    
    Sorry, I may have missed your point. Which API inconsistency are you pointing out here?
    I just made the Python APIs consistent with the Scala ones, like:
    ```
    --- python
    >>> from pyspark.sql.types import *
    >>> fields = [StructField('a', IntegerType(), True), StructField('b', StringType(), True), StructField('c', DoubleType(), True)]
    >>> schema = StructType(fields)
    >>> spark.read.schema(schema).csv("/Users/maropu/Desktop/test.csv").show()
    +---+----+---+
    |  a|   b|  c|
    +---+----+---+
    |  1| aaa|0.3|
    +---+----+---+
    
    >>> spark.read.schema("a INT, b STRING, c DOUBLE").csv("/Users/maropu/Desktop/test.csv").show()
    +---+----+---+
    |  a|   b|  c|
    +---+----+---+
    |  1| aaa|0.3|
    +---+----+---+
    
    --- scala
    scala> import org.apache.spark.sql.types._
    scala> val fields = StructField("a", IntegerType) :: StructField("b", StringType) :: StructField("c", DoubleType) :: Nil
    scala> val schema = StructType(fields)
    scala> spark.read.schema(schema).csv("/Users/maropu/Desktop/test.csv").show
    +---+----+---+
    |  a|   b|  c|
    +---+----+---+
    |  1| aaa|0.3|
    +---+----+---+
    
    scala> spark.read.schema("a INT, b STRING, c DOUBLE").csv("/Users/maropu/Desktop/test.csv").show
    +---+----+---+
    |  a|   b|  c|
    +---+----+---+
    |  1| aaa|0.3|
    +---+----+---+
    ```
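
    For intuition, here is a rough sketch (plain Python, no Spark required) of how a DDL-formatted schema string maps to field name/type pairs. The real parsing is done by Spark's `StructType.fromDDL`, which handles nested and complex types; the toy parser below only covers flat comma-separated fields and is purely illustrative:
    ```python
    # Simplified illustration of parsing a DDL-style schema string like
    # "a INT, b STRING, c DOUBLE" into (name, type) pairs. Spark's actual
    # parser (StructType.fromDDL) is far more capable; this is a sketch.
    def parse_ddl(ddl: str):
        fields = []
        for part in ddl.split(","):
            # Each part looks like "<name> <type>"; split on first whitespace.
            name, dtype = part.strip().split(None, 1)
            fields.append((name, dtype.upper()))
        return fields

    print(parse_ddl("a INT, b STRING, c DOUBLE"))
    ```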

