[ https://issues.apache.org/jira/browse/PARQUET-647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15355548#comment-15355548 ]
Mahadevan Sudarsanan commented on PARQUET-647:
----------------------------------------------
I also tried adding the parquet-hive bundle to check whether there was a
Parquet version mismatch, but it still produced the NullPointerException.
> Null Pointer Exception in Hive upon reading Parquet
> ---------------------------------------------------
>
> Key: PARQUET-647
> URL: https://issues.apache.org/jira/browse/PARQUET-647
> Project: Parquet
> Issue Type: Bug
> Components: parquet-format, parquet-mr
> Affects Versions: 1.6.0
> Environment: Hadoop 2.6
> Hive 0.14
> Parquet 1.6
> SPARK 1.6.1
> Scala 2.11
> Reporter: Mahadevan Sudarsanan
> Priority: Blocker
> Labels: hadoop, hive, nullpointerexception, parquet, spark
> Attachments: Screen Shot 2016-06-24 at 11.01.46 AM.png,
> Screen Shot 2016-06-24 at 11.01.55 AM.png,
> Screen Shot 2016-06-24 at 11.02.50 AM.png,
> Screen Shot 2016-06-24 at 11.03.56 AM.png
>
>
> When I write Parquet files from a Spark job and try to read them in Hive as
> an external table, I get a NullPointerException. On further analysis, I
> found that I had some NULL values in my transformation (which used the
> Dataset and DataFrame APIs) before saving to Parquet. The two fields that
> contain only NULL are float data types. When I removed these two columns
> from the Parquet datasets, I was able to read the data in Hive. By contrast,
> with the same all-NULL columns present, I was able to read the data in Hive
> when the job wrote ORC instead.
> In short: a column of any datatype other than String that is written to
> Parquet completely empty (all NULL) cannot be read by Hive and throws a
> NullPointerException.
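For reference, a minimal sketch of the kind of Spark 1.6 job described above:
two entirely-NULL float columns written to Parquet, then read through a Hive
external table. The column names, output path, and table name are
hypothetical, not taken from the report.
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{FloatType, StringType, StructField, StructType}

object ParquetNullRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("parquet-null-repro"))
    val sqlContext = new SQLContext(sc)

    // Two nullable float columns holding only NULLs, next to a populated
    // string column (column names are made up for illustration).
    val schema = StructType(Seq(
      StructField("id", StringType, nullable = false),
      StructField("score", FloatType, nullable = true),
      StructField("weight", FloatType, nullable = true)))

    val rows = sc.parallelize(Seq(Row("a", null, null), Row("b", null, null)))

    // The write itself succeeds; the failure reported in this issue occurs
    // on the Hive side when the files are read back.
    sqlContext.createDataFrame(rows, schema)
      .write.parquet("/tmp/parquet_null_repro")

    // Hive side, per the report:
    //   CREATE EXTERNAL TABLE parquet_null_repro (id string, score float, weight float)
    //     STORED AS PARQUET LOCATION '/tmp/parquet_null_repro';
    //   SELECT * FROM parquet_null_repro;  -- throws NullPointerException

    sc.stop()
  }
}
{code}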