[ https://issues.apache.org/jira/browse/SPARK-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152718#comment-15152718 ]

Julien Baley commented on SPARK-13046:
--------------------------------------

Sorry it took me so long to come back to you.

We're using Hive (and Java), and I'm calling
`hiveContext.createExternalTable("table_name", "s3://bucket/some_path/", "parquet");`,
i.e. I believe I'm passing the correct root path and Spark then infers something
wrong somewhere in the middle? (See the sketch below.)
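
For completeness, here is a minimal sketch of how we make that call today (Spark 1.6
Java API; the app name, table name and bucket path are just placeholders):

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

public class CreateTableSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("create-table-sketch"));
    HiveContext hiveContext = new HiveContext(sc);

    // Point the external table at the common root; partition discovery is expected
    // to pick up date_received=... and fingerprint=... underneath it.
    hiveContext.createExternalTable("table_name", "s3://bucket/some_path/", "parquet");
  }
}
{code}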

I don't think I have a way to set basePath from there? [~yhuai], do you mean 
calling `sqlContext.read.option(key, value)`? Is there a way I can access the 
SQLContext from my HiveContext?
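
To make sure I understand the suggestion, is it something like the sketch below? I'm
assuming the option key is `basePath` and that I can call `read()` directly on the
HiveContext since it extends SQLContext; the paths are the same placeholders as above.

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class BasePathSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("base-path-sketch"));
    HiveContext hiveContext = new HiveContext(sc);

    // HiveContext extends SQLContext, so read() is available directly.
    DataFrame df = hiveContext.read()
        .option("basePath", "s3://bucket/some_path/") // table root for partition discovery
        .parquet("s3://bucket/some_path/");

    // Expecting date_received and fingerprint to show up as partition columns.
    df.printSchema();
  }
}
{code}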

> Partitioning looks broken in 1.6
> --------------------------------
>
>                 Key: SPARK-13046
>                 URL: https://issues.apache.org/jira/browse/SPARK-13046
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Julien Baley
>
> Hello,
> I have a list of files in s3:
> {code}
> s3://bucket/some_path/date_received=2016-01-13/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> s3://bucket/some_path/date_received=2016-01-14/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> s3://bucket/some_path/date_received=2016-01-15/fingerprint=2f6a09d370b4021d/{_SUCCESS,metadata,some parquet files}
> {code}
> Until 1.5.2, this all worked well: passing s3://bucket/some_path/ (the same root 
> for all three lines) would correctly identify two key/value pairs, one for 
> `date_received` and one for `fingerprint`.
> From 1.6.0, I get the following exception:
> {code}
> assertion failed: Conflicting directory structures detected. Suspicious paths
> s3://bucket/some_path/date_received=2016-01-13
> s3://bucket/some_path/date_received=2016-01-14
> s3://bucket/some_path/date_received=2016-01-15
> {code}
> That is to say, the partitioning code now fails to identify 
> date_received=2016-01-13 as a key/value pair.
> I can see that there has been some activity recently on 
> spark/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala,
> so that seems related, especially the commits 
> https://github.com/apache/spark/commit/7b5d9051cf91c099458d092a6705545899134b3b and 
> https://github.com/apache/spark/commit/de289bf279e14e47859b5fbcd70e97b9d0759f14.
> If I read the tests added in those commits correctly:
> - they don't seem to actually test the return value, only that it doesn't crash
> - they only test cases where the s3 path contains a single key/value pair (a test 
> with two pairs would otherwise have caught the bug)
> This is problematic for us as we're trying to migrate all of our Spark services 
> to 1.6.0 and this bug is a real blocker. I know it's possible to force a 'union', 
> but I'd rather not do that if the bug can be fixed.
> Any questions, please shoot.



