Suhas Nalapure created SPARK-18004:
--------------------------------------

             Summary: DataFrame filter predicate push-down fails for Oracle Timestamp type columns
                 Key: SPARK-18004
                 URL: https://issues.apache.org/jira/browse/SPARK-18004
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.6.0
            Reporter: Suhas Nalapure
            Priority: Critical
DataFrame filter predicate push-down fails for Oracle Timestamp type columns with the exception java.sql.SQLDataException: ORA-01861: literal does not match format string.

Java source code (this code works fine for the MySQL & MSSQL databases):
---------------------------------------------------------------------------------------
//DataFrame df = create a DataFrame over an Oracle table
df = df.filter(df.col("TS").lt(new java.sql.Timestamp(System.currentTimeMillis())));
df.explain();
df.show();

Log statements with the exception:
--------------------------------------------
Schema:
root
 |-- ID: string (nullable = false)
 |-- TS: timestamp (nullable = true)
 |-- DEVICE_ID: string (nullable = true)
 |-- REPLACEMENT: string (nullable = true)

== Physical Plan ==
Filter (TS#1 < 1476861841934000)
+- Scan JDBCRelation(jdbc:oracle:thin:@10.0.0.111:1521:orcl,ORATABLE,[Lorg.apache.spark.Partition;@78c74647,{user=user, password=pwd, url=jdbc:oracle:thin:@10.0.0.111:1521:orcl, dbtable=ORATABLE, driver=oracle.jdbc.driver.OracleDriver})[ID#0,TS#1,DEVICE_ID#2,REPLACEMENT#3] PushedFilters: [LessThan(TS,2016-10-19 12:54:01.934)]

2016-10-19 12:54:04,268 ERROR [Executor task launch worker-0] org.apache.spark.executor.Executor Exception in task 0.0 in stage 0.0 (TID 0)
java.sql.SQLDataException: ORA-01861: literal does not match format string
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:461)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:402)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1065)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:681)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:256)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:577)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:239)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:75)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:1043)
	at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1111)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1353)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4485)
	at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:4566)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:5251)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:383)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:359)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
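A likely explanation, judging from the PushedFilters line above: the pushed-down predicate reaches Oracle as a bare quoted string ('2016-10-19 12:54:01.934'), which Oracle tries to parse against the session's NLS_TIMESTAMP_FORMAT, failing with ORA-01861. The sketch below illustrates one possible dialect-level fix: render the java.sql.Timestamp as an explicit TO_TIMESTAMP literal so the format never depends on session settings. This is a hypothetical helper for illustration (compileTimestamp is not an actual Spark API), not the project's actual fix.

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;

public class OracleTimestampLiteral {

    // Hypothetical helper (illustration only): instead of emitting the
    // timestamp as a bare quoted string, wrap it in TO_TIMESTAMP with an
    // explicit format mask so Oracle never falls back to NLS_TIMESTAMP_FORMAT.
    static String compileTimestamp(Timestamp ts) {
        // Millisecond precision matches java.sql.Timestamp.toString() here;
        // FF3 tells Oracle to expect exactly three fractional digits.
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        return "TO_TIMESTAMP('" + f.format(ts) + "', 'YYYY-MM-DD HH24:MI:SS.FF3')";
    }

    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2016-10-19 12:54:01.934");
        // Produces a WHERE clause fragment Oracle can parse unambiguously,
        // unlike the bare literal in the pushed-down filter above.
        System.out.println("WHERE TS < " + compileTimestamp(ts));
    }
}
```

Until something like this is applied in the Oracle JDBC dialect, a workaround is to avoid the push-down (e.g. filter after collecting, or compare against a column cast on the Oracle side), since the generated literal only happens to work on databases whose implicit string-to-timestamp conversion accepts the yyyy-MM-dd HH:mm:ss.SSS shape.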