[ https://issues.apache.org/jira/browse/SPARK-26631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17114013#comment-17114013 ]
Jiri Humpolicek commented on SPARK-26631:
-----------------------------------------
I just tried the same thing with the same result: reading Parquet from a har
file does not work for me. Moreover, I cannot even read a JSON file from a
har. My aim was to minimize the number of files on HDFS by using a har file.
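
As a first isolation step, here is a minimal sketch (assuming the paths from the report below) that opens the archived file through Hadoop's HarFileSystem directly from the spark-shell, bypassing Spark's datasource layer entirely. If this succeeds, the archive itself is readable and the problem lies in how Spark enumerates har:// paths:
{code:java}
import org.apache.hadoop.fs.Path

// Open the archived Parquet file via Hadoop's FileSystem API, not Spark SQL.
val harPath = new Path("har:///tmp/testarchive.har/userdata1.parquet")
val fs = harPath.getFileSystem(sc.hadoopConfiguration)
println(fs.getFileStatus(harPath))    // length, modification time, etc.

// A valid Parquet file starts with the magic bytes "PAR1".
val in = fs.open(harPath)
val head = new Array[Byte](4)
in.readFully(head)
in.close()
println(new String(head, "US-ASCII")) // expect "PAR1"
{code}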
> Issue while reading Parquet data from Hadoop Archive files (.har)
> -----------------------------------------------------------------
>
> Key: SPARK-26631
> URL: https://issues.apache.org/jira/browse/SPARK-26631
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: Sathish
> Priority: Minor
>
> While reading a Parquet file from a Hadoop Archive file, Spark fails with
> the exception below:
>
> {code:java}
> scala> val hardf = sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet")
> org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$9.apply(DataSource.scala:208)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:207)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:393)
>   at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
>   at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
>   at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
>   ... 49 elided
> {code}
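>
> Since the message asks for the schema to be specified manually, one possible workaround sketch is to supply an explicit schema. Only registration_dttm and id are known from the successful HDFS read shown below; a subset schema is sufficient because Parquet prunes the remaining columns. This bypasses schema inference but may still fail later if Spark cannot list files inside the archive:
> {code:java}
> import org.apache.spark.sql.types._
>
> // Workaround sketch: skip schema inference by giving the reader an
> // explicit (subset) schema. Column names are taken from the output of
> // the successful HDFS read further down.
> val schema = StructType(Seq(
>   StructField("registration_dttm", TimestampType),
>   StructField("id", IntegerType)))
> val hardf = sqlContext.read.schema(schema)
>   .parquet("har:///tmp/testarchive.har/userdata1.parquet")
> {code}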
>
> Whereas the same Parquet file can be read from HDFS directly without any issues:
> {code:java}
> scala> val df = sqlContext.read.parquet("hdfs:///tmp/testparquet/userdata1.parquet")
> df: org.apache.spark.sql.DataFrame = [registration_dttm: timestamp, id: int ... 11 more fields]
> {code}
>
> +Here are the steps to reproduce the issue+
>
> a) hadoop fs -mkdir /tmp/testparquet
> b) Download the sample Parquet data and rename the file to userdata1.parquet
> wget [https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet?raw=true]
> c) hadoop fs -put userdata1.parquet /tmp/testparquet
> d) hadoop archive -archiveName testarchive.har -p /tmp/testparquet /tmp
> e) We should now be able to see the file inside the har archive:
> hadoop fs -ls har:///tmp/testarchive.har
> f) Launch spark2-shell / spark-shell
> g) Try to read the archived Parquet file:
> {code:java}
> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
> val df =
> sqlContext.read.parquet("har:///tmp/testarchive.har/userdata1.parquet"){code}
> Is there anything I am missing here?
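>
> As a further sanity check, one could read the archived file at the RDD level, which goes through Hadoop InputFormats rather than Spark SQL's file index (a sketch, assuming the same paths as above). If this returns bytes while sqlContext.read.parquet fails, the problem is localized to the DataSource file-listing layer:
> {code:java}
> // Read the archived file as raw (filename, bytes) pairs via sc.binaryFiles.
> // A non-zero length here means the har archive itself is readable and the
> // failure lies in Spark SQL's path handling, not in Hadoop.
> val raw = sc.binaryFiles("har:///tmp/testarchive.har/userdata1.parquet")
> raw.map { case (name, stream) => (name, stream.toArray.length) }
>   .collect.foreach(println)
> {code}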
>