Thanks for coming back with the solution!

Sorry my suggestion did not help.


Daniel

On Wed, 20 Jun 2018, 21:46 mattl156, <matt.l...@gmail.com> wrote:

> Alright, so I figured it out.
>
> When reading from and writing to Hive metastore Parquet tables, Spark SQL
> will try to use its own Parquet support instead of Hive SerDe for better
> performance.
>
> So setting properties like the ones below has no impact:
>
> sqlContext.setConf("mapred.input.dir.recursive","true");
> sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
>
> Spark's built-in Parquet support likewise does not read recursive
> directories.
>
> Whether Spark uses its own Parquet support or the Hive SerDe is controlled
> by the spark.sql.hive.convertMetastoreParquet configuration, which is
> turned on by default.
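>
> As a sanity check, you can read the current value back in the shell; a
> minimal sketch, assuming a sqlContext is already in scope:
>
> sqlContext.getConf("spark.sql.hive.convertMetastoreParquet") // "true" by default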
>
> After setting this to false:
>
> sqlContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")
>
> I can set the other two properties above and query my Parquet table with
> subdirectories.
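>
> Putting it all together, a minimal sketch of the working sequence in the
> Spark shell (my_table is a hypothetical placeholder for a metastore Parquet
> table whose data sits in subdirectories):
>
> // Fall back to the Hive SerDe so the Hadoop input settings are honored
> sqlContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")
> // Recurse into subdirectories when listing input files
> sqlContext.setConf("mapred.input.dir.recursive", "true")
> // Interpret binary Parquet columns as strings
> sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
> // my_table is hypothetical; substitute your own table name
> sqlContext.sql("SELECT COUNT(*) FROM my_table").show()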
>
> -Matt
>
>
>
>
>
> --
> Sent from:
> http://apache-spark-user-list.1001560.n3.nabble.com/
>
>
