[ https://issues.apache.org/jira/browse/PARQUET-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ryan Blue resolved PARQUET-317.
-------------------------------
    Resolution: Fixed
    Fix Version/s: 1.8.0

Merged #228. Thanks for fixing this, Steven!

> writeMetaDataFile crashes when a relative root Path is used
> -----------------------------------------------------------
>
>                 Key: PARQUET-317
>                 URL: https://issues.apache.org/jira/browse/PARQUET-317
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.8.0
>            Reporter: Steven She
>            Assignee: Steven She
>            Priority: Minor
>             Fix For: 1.8.0
>
> In Spark, I can save an RDD to the local file system using a relative path, e.g.:
> {noformat}
> rdd.saveAsNewAPIHadoopFile(
>   "relativeRoot",
>   classOf[Void],
>   tag.runtimeClass.asInstanceOf[Class[T]],
>   classOf[ParquetOutputFormat[T]],
>   job.getConfiguration)
> {noformat}
> This leads to a crash in the ParquetFileWriter.mergeFooters(..) method, since the footer paths are read as fully qualified paths but the root path is provided as a relative path:
> {noformat}
> org.apache.parquet.io.ParquetEncodingException: /Users/stevenshe/schema/relativeRoot/part-r-00000.snappy.parquet invalid: all the files must be contained in the root relativeRoot
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
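The root cause described above can be sketched without any Parquet or Hadoop dependencies: an absolute footer path can never be a child of a relative root, so the containment check in mergeFooters rejects it until the root is qualified. This is a minimal illustrative sketch using java.nio.file (not Parquet's actual code; the class and method names here are hypothetical, and the assumption that the fix amounts to qualifying the root Path is based on the issue description, not on the contents of #228):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RootContainmentDemo {
    // Mirrors the kind of containment check mergeFooters performs:
    // every footer path must be under the root path.
    static boolean containedInRoot(Path root, Path footer) {
        return footer.startsWith(root);
    }

    public static void main(String[] args) {
        // Footer paths come back fully qualified (absolute), as in the stack trace.
        Path footer =
            Paths.get("/Users/stevenshe/schema/relativeRoot/part-r-00000.snappy.parquet");

        // A relative root never matches an absolute footer path -> the exception.
        Path relativeRoot = Paths.get("relativeRoot");
        System.out.println(containedInRoot(relativeRoot, footer)); // false

        // Qualifying the root against the working directory restores the match.
        Path qualifiedRoot = Paths.get("/Users/stevenshe/schema/relativeRoot");
        System.out.println(containedInRoot(qualifiedRoot, footer)); // true
    }
}
```

In Hadoop terms, the equivalent qualification step would be something like `path.getFileSystem(conf).makeQualified(path)` before comparing against footer paths.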