GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/14217

    [SPARK-16562][SQL] Do not allow downcast in INT32 based types for normal Parquet reader

    ## What changes were proposed in this pull request?
    
    Currently, the INT32-based types (`ShortType`, `ByteType`, `IntegerType`) can be downcast in any combination. For example, the code below writes `IntegerType` data and then reads it back with a `ShortType` schema:
    
    ```scala
    import org.apache.spark.sql.types._

    val path = "/tmp/test.parquet"
    val data = (1 to 4).map(i => Tuple1(i.toInt))
    data.toDF("a").write.parquet(path)
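    // Reading the INT32 data back with a narrower ShortType schema is the
    // downcast this PR disallows for the non-vectorized reader.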
    val schema = StructType(StructField("a", ShortType, true) :: Nil)
    spark.read.schema(schema).parquet(path).show()
    ```
    
    runs without any error, silently reading the INT32 values as `ShortType`. This should not be allowed.
    
    This only happens when the vectorized reader is disabled, i.e. it affects only the normal (non-vectorized) Parquet reader.
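    
    To reproduce this, the snippet above can be run with the vectorized reader turned off. A minimal sketch of doing that (the configuration key is Spark's standard setting for the vectorized Parquet reader):
    
    ```scala
    // Force the non-vectorized (parquet-mr based) read path, where the silent
    // downcast described above happens.
    spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
    spark.read.schema(schema).parquet(path).show()
    ```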
    
    ## How was this patch tested?
    
    A unit test was added in `ParquetIOSuite`.
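    
    For illustration only, the added test could look roughly like the sketch below; the test name, exception type, and surrounding helpers are assumptions rather than the exact code in `ParquetIOSuite`:
    
    ```scala
    // Illustrative sketch only; relies on the withSQLConf/withTempPath helpers
    // available to Spark's Parquet test suites, and assumes testImplicits is in
    // scope for toDF.
    test("SPARK-16562: disallow INT32 downcast in the non-vectorized reader") {
      withSQLConf("spark.sql.parquet.enableVectorizedReader" -> "false") {
        withTempPath { path =>
          (1 to 4).map(i => Tuple1(i.toInt)).toDF("a").write.parquet(path.getAbsolutePath)
          val schema = StructType(StructField("a", ShortType, true) :: Nil)
          // Reading the INT32 data with a ShortType schema should now fail
          // instead of silently downcasting the values.
          intercept[Exception] {
            spark.read.schema(schema).parquet(path.getAbsolutePath).collect()
          }
        }
      }
    }
    ```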
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark SPARK-16562

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #14217
    
----
commit 97303c97e990c12abebf309fe3ab9dd0fc31e515
Author: hyukjinkwon <[email protected]>
Date:   2016-07-15T04:51:44Z

    Do not allow downcast in INT32 based types for non-vectorized Parquet reader

----

