[ https://issues.apache.org/jira/browse/SPARK-20530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15989619#comment-15989619 ]
James Maki commented on SPARK-20530:
------------------------------------

As with SPARK-17100, calling {{cache()}} allows this filter to work, but it doesn't seem like that should be necessary. Additionally, some other operations (like calling {{distinct()}} after the filter) can also allow this flow to work. A sketch of both workarounds follows the quoted issue below.

> "Cannot evaluate expression" when filtering on parquet partition column
> -----------------------------------------------------------------------
>
>                 Key: SPARK-20530
>                 URL: https://issues.apache.org/jira/browse/SPARK-20530
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.1.0
>         Environment: spark-2.1.0-bin-hadoop2.7.tgz Python2
>            Reporter: James Maki
>
> In pyspark, when filtering on a parquet partition column, the following error occurs:
> {code}
> py4j.protocol.Py4JJavaError: An error occurred while calling o54.toString.
> : java.lang.UnsupportedOperationException: Cannot evaluate expression: <lambda>(input[0, int, true])
> {code}
> Reproduce via the following script:
> {code}
> from pyspark.sql import SparkSession
> from pyspark.sql.functions import udf
> from pyspark.sql.types import BooleanType
>
> if __name__ == '__main__':
>     sql = SparkSession.builder.getOrCreate()
>     data = [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (1, 6)]
>     sql.createDataFrame(data, ['key', 'value'])\
>         .write\
>         .partitionBy('key')\
>         .format('parquet')\
>         .save('dest.parquet', mode='overwrite')
>     sql.read.parquet('dest.parquet')\
>         .filter(udf(lambda x: True, BooleanType())('key'))\
>         .explain(extended=True)
> {code}
> Full script output
> {code}
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> 17/04/28 19:45:41 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
> Traceback (most recent call last):
>   File "udf_filter_partition_bug.py", line 15, in <module>
>     .explain(extended=True)
>   File "C:\build\env\python-2.7\lib\site-packages\pyspark-2.1.0-py2.7.egg\pyspark\sql\dataframe.py", line 266, in explain
>     print(self._jdf.queryExecution().toString())
>   File "C:\build\env\python-2.7\lib\site-packages\py4j-0.10.4-py2.7.egg\py4j\java_gateway.py", line 1133, in __call__
>     answer, self.gateway_client, self.target_id, self.name)
>   File "C:\build\env\python-2.7\lib\site-packages\pyspark-2.1.0-py2.7.egg\pyspark\sql\utils.py", line 63, in deco
>     return f(*a, **kw)
>   File "C:\build\env\python-2.7\lib\site-packages\py4j-0.10.4-py2.7.egg\py4j\protocol.py", line 319, in get_return_value
>     format(target_id, ".", name), value)
> py4j.protocol.Py4JJavaError: An error occurred while calling o54.toString.
> : java.lang.UnsupportedOperationException: Cannot evaluate expression: <lambda>(input[0, int, true])
>       at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.eval(Expression.scala:221)
>       at org.apache.spark.sql.execution.python.PythonUDF.eval(PythonUDF.scala:27)
>       at org.apache.spark.sql.catalyst.expressions.InterpretedPredicate$$anonfun$create$1.apply(predicates.scala:34)
>       at org.apache.spark.sql.catalyst.expressions.InterpretedPredicate$$anonfun$create$1.apply(predicates.scala:34)
>       at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex$$anonfun$9.apply(PartitioningAwareFileIndex.scala:174)
>       at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex$$anonfun$9.apply(PartitioningAwareFileIndex.scala:173)
>       at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
>       at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
>       at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
>       at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.prunePartitions(PartitioningAwareFileIndex.scala:173)
>       at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.listFiles(PartitioningAwareFileIndex.scala:66)
>       at org.apache.spark.sql.execution.FileSourceScanExec.org$apache$spark$sql$execution$FileSourceScanExec$$selectedPartitions$lzycompute(DataSourceScanExec.scala:159)
>       at org.apache.spark.sql.execution.FileSourceScanExec.org$apache$spark$sql$execution$FileSourceScanExec$$selectedPartitions(DataSourceScanExec.scala:159)
>       at org.apache.spark.sql.execution.FileSourceScanExec$$anonfun$17.apply(DataSourceScanExec.scala:244)
>       at org.apache.spark.sql.execution.FileSourceScanExec$$anonfun$17.apply(DataSourceScanExec.scala:243)
>       at scala.Option.map(Option.scala:146)
>       at org.apache.spark.sql.execution.FileSourceScanExec.<init>(DataSourceScanExec.scala:243)
>       at org.apache.spark.sql.execution.datasources.FileSourceStrategy$.apply(FileSourceStrategy.scala:109)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
>       at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>       at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>       at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>       at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>       at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>       at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>       at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>       at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>       at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>       at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:79)
>       at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:75)
>       at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:84)
>       at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:84)
>       at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:232)
>       at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:232)
>       at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:107)
>       at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:232)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>       at py4j.Gateway.invoke(Gateway.java:280)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:214)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
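To make the workarounds concrete, here is a minimal sketch, assuming the same {{dest.parquet}} layout produced by the reproduction script above; the comments reflect my reading of the stack trace, not a confirmed root-cause analysis:

{code}
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

if __name__ == '__main__':
    sql = SparkSession.builder.getOrCreate()
    always_true = udf(lambda x: True, BooleanType())

    # Workaround 1: cache() before the filter. The cached relation is no
    # longer a plain file scan, so partition pruning apparently never tries
    # to evaluate the Python UDF on the driver.
    sql.read.parquet('dest.parquet')\
        .cache()\
        .filter(always_true('key'))\
        .explain(extended=True)

    # Workaround 2: distinct() after the filter also allows the plan to compile.
    sql.read.parquet('dest.parquet')\
        .filter(always_true('key'))\
        .distinct()\
        .explain(extended=True)
{code}

Where the predicate can be written natively (e.g. {{df.filter(df.key == 0)}}), that avoids the Python UDF entirely and should still benefit from partition pruning.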