Yeah, we don't currently push predicates down into the metastore. We do,
however, prune partitions based on predicates (so we don't read the data).
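
For reference, a minimal sketch of what that pruning looks like from the user
side; the "events" table and its "dt" partition column are made up for
illustration, and this assumes a Spark 1.3-era HiveContext as in the original
question:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object PartitionPruningExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("PartitionPruningExample"))
        val hiveContext = new HiveContext(sc)

        // With a filter on the partition column, only partitions matching
        // dt = '2015-04-13' are actually scanned (partition pruning).
        // The metastore lookup itself is not filtered, though, so metadata
        // for all partitions is still fetched up front.
        val df = hiveContext.sql(
          "SELECT count(*) FROM events WHERE dt = '2015-04-13'")
        df.show()
      }
    }

So the scan is limited to the matching partitions, but the up-front metastore
call still enumerates every partition, which is where the delay you're seeing
comes from.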

On Mon, Apr 13, 2015 at 2:53 PM, Tom Graves <tgraves...@yahoo.com.invalid>
wrote:

> Hey,
> I was trying out Spark SQL using the HiveContext and doing a select on a
> partitioned table with lots of partitions (16,000+). It took over 6 minutes
> before it even started the job. It looks like it was querying the Hive
> metastore and got a good chunk of data back, which I'm guessing is info on
> the partitions. Running the same query using Hive takes 45 seconds for the
> entire job.
> I know Spark SQL doesn't support all of the Hive optimizations. Is this a
> known limitation currently?
> Thanks,
> Tom
