[ 
https://issues.apache.org/jira/browse/SPARK-20457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Gupta updated SPARK-20457:
-----------------------------------
    Description: 
I have a CSV file, test.csv:

{code}
col
1
2
3
4
{code}

When I read it with Spark, the schema of the data is inferred correctly:

{code:java}
val df = spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv")

df.printSchema
root
 |-- col: integer (nullable = true)
{code}

But when I override the schema of the CSV file and set `inferSchema` to false, SparkSession picks up the custom schema only partially.

{code:java}
import org.apache.spark.sql.types.{StructType, StructField, StringType}

val df = spark.read.option("header", "true").option("inferSchema", "false")
  .schema(StructType(List(StructField("custom", StringType, false))))
  .csv("test.csv")

df.printSchema
root
 |-- custom: string (nullable = true)
{code}

Only the column name (`custom`) and the DataType (`StringType`) are picked up. The `nullable` flag is ignored: the printed schema still shows `nullable = true`, which is incorrect.

I am not able to understand this behavior.
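For what it's worth, file-based sources in Spark appear to force all fields to nullable on read, since the files may contain missing values. A possible workaround (a sketch only, assuming the data really contains no nulls) is to re-apply the desired schema with `spark.createDataFrame`, which keeps the nullability flags exactly as given:

{code:java}
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Desired schema with a non-nullable column.
val schema = StructType(List(StructField("custom", StringType, false)))

val df = spark.read.option("header", "true").schema(schema).csv("test.csv")

// Re-applying the schema on the row RDD preserves nullable = false.
val strictDf = spark.createDataFrame(df.rdd, schema)

strictDf.printSchema
// root
//  |-- custom: string (nullable = false)
{code}

Note this only changes the schema metadata; it does not validate the data, so a null in the file would surface as a runtime error later.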



> Spark CSV is not able to Override Schema while reading data
> -----------------------------------------------------------
>
>                 Key: SPARK-20457
>                 URL: https://issues.apache.org/jira/browse/SPARK-20457
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Himanshu Gupta
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
