[ https://issues.apache.org/jira/browse/SPARK-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002256#comment-15002256 ]
Ted Yu commented on SPARK-11661:
--------------------------------

Looks like org.apache.spark.streaming.rdd.TrackStateRDDSuite started to fail after this went in:
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/4084/

> We should still pushdown filters returned by a data source's unhandledFilters
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-11661
>                 URL: https://issues.apache.org/jira/browse/SPARK-11661
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Blocker
>             Fix For: 1.6.0, 1.7.0
>
>
> We added the unhandledFilters interface in SPARK-10978. It gives a data
> source a way to tell Spark SQL that it may not apply the returned filters to
> every row, so Spark SQL should use a Filter operator to evaluate those
> filters. However, even if a filter is part of the returned unhandledFilters,
> we should still push it down. For example, our internal data sources do not
> override this method; if we do not push down those filters, we are
> effectively turning off the filter pushdown feature.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
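The intended semantics described in the issue can be sketched with a simplified, self-contained model. All names here (ToyRelation, PlannerSketch, plan) are hypothetical and are not Spark's actual planner code; only the unhandledFilters contract mirrors the Data Sources API. The point of the fix: every translatable filter is still pushed to the source, while the unhandled ones are additionally re-evaluated by a Filter operator.

```scala
// Simplified model of the SPARK-11661 behavior; hypothetical names,
// not the real Spark SQL planner.

sealed trait Filter
case class EqualTo(attr: String, value: Int) extends Filter
case class GreaterThan(attr: String, value: Int) extends Filter

// A toy relation that handles EqualTo exactly but applies GreaterThan
// only best-effort, so it reports GreaterThan as unhandled.
class ToyRelation {
  def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.collect { case f: GreaterThan => f }
}

object PlannerSketch {
  // Returns (filters pushed to the source, filters Spark must re-check).
  def plan(rel: ToyRelation,
           filters: Array[Filter]): (Array[Filter], Array[Filter]) = {
    val unhandled = rel.unhandledFilters(filters).toSet
    // The fix: push down ALL filters, including the unhandled ones...
    val pushed = filters
    // ...but keep a Filter operator over the scan for the unhandled ones,
    // since the source may not apply them to every row.
    val postScan = filters.filter(unhandled.contains)
    (pushed, postScan)
  }

  def main(args: Array[String]): Unit = {
    val filters: Array[Filter] = Array(EqualTo("a", 1), GreaterThan("b", 2))
    val (pushed, postScan) = plan(new ToyRelation, filters)
    println(pushed.length)   // prints 2: both filters are pushed down
    println(postScan.length) // prints 1: only GreaterThan is re-evaluated
  }
}
```

Without the fix, a planner that pushed down only `filters diff unhandledFilters` would push nothing for a source that never overrides unhandledFilters (the default returns all filters), silently disabling pushdown, which is exactly the regression the issue describes.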