davidradl commented on PR #79:
URL: https://github.com/apache/flink-connector-jdbc/pull/79#issuecomment-1816718098

   @snuyanzin I have this test working locally. Actually the plan test would not have shown this error, as the plan does not contain the filter. The way predicate pushdown was implemented in JDBC is that applyFilters is called and the filters are then stored in Java fields; this fix then takes those stored filters and appends them to the SELECT statement. The same approach was taken for the scan join as well.
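   To make the described flow concrete, here is a minimal sketch, assuming the standard Flink SupportsFilterPushDown ability interface. The class and helper names (SketchJdbcSource, buildQuery, translateToSql) are hypothetical and not the actual flink-connector-jdbc code:
   ```java
   // Sketch of the pattern: the planner calls applyFilters(), the source
   // stores the accepted predicates in a Java field, and query generation
   // later appends them to the SELECT statement.
   import java.util.ArrayList;
   import java.util.List;

   import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
   import org.apache.flink.table.expressions.ResolvedExpression;

   public class SketchJdbcSource implements SupportsFilterPushDown {

       // Filters accepted during planning, kept for later query generation.
       private List<ResolvedExpression> pushedFilters = new ArrayList<>();

       @Override
       public Result applyFilters(List<ResolvedExpression> filters) {
           // Remember the filters. Returning them as "remaining" as well is
           // the conservative option: the planner keeps a Calc on top in
           // case the source cannot apply every predicate itself.
           this.pushedFilters = new ArrayList<>(filters);
           return Result.of(new ArrayList<>(filters), new ArrayList<>(filters));
       }

       // Called when building the JDBC query: append the stored filters to
       // the SELECT statement as a WHERE clause.
       String buildQuery(String baseSelect) {
           if (pushedFilters.isEmpty()) {
               return baseSelect;
           }
           return baseSelect + " WHERE " + translateToSql(pushedFilters);
       }

       private String translateToSql(List<ResolvedExpression> filters) {
           // Expression-to-SQL translation elided in this sketch.
           return "...";
       }
   }
   ```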
   
   The optimized plan for my test case is:
   ```
   Calc(select=[ip, PROCTIME_MATERIALIZE(proctime) AS proctime, ip0, type, age])
   +- LookupJoin(table=[default_catalog.default_database.c], 
joinType=[LeftOuterJoin], lookup=[ip=ip], select=[ip, proctime, ip, CAST(0 AS 
INTEGER) AS type, age, CAST(ip AS VARCHAR(2147483647)) AS ip0])
      +- Calc(select=[ip, PROCTIME() AS proctime])
         +- TableSourceScan(table=[[default_catalog, default_database, a]], 
fields=[ip])
   ```
   
   with no join condition.  
   
   My thinking is that this fix is pragmatic and gets things working given the way predicate pushdown was implemented. I suspect a better change would be to revisit this area so that these join conditions are handled in the Calcite graph. IMHO the JDBC dialect should just be mapping types and providing quote characters; it should not be manipulating filters into join conditions for predicate pushdown. A rough illustration of that narrower responsibility follows.
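   Concretely, a dialect abstraction limited that way could look like the hypothetical interface below. This is not the actual JdbcDialect API, just a sketch of the proposed separation of concerns:
   ```java
   import org.apache.flink.table.types.DataType;

   // Hypothetical dialect contract: type mapping and identifier quoting
   // only, with no filter or join-condition manipulation.
   public interface SketchDialect {

       // Wrap an identifier in the dialect's quote characters,
       // e.g. `col` for MySQL or "col" for PostgreSQL.
       String quoteIdentifier(String identifier);

       // Map a database column type name to a Flink DataType.
       DataType mapType(String databaseTypeName);

       // Deliberately absent: anything that rewrites pushed-down filters
       // into join conditions; under this design that stays in the
       // planner's Calcite graph.
   }
   ```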
   
   I will include the test case, as I have it working.
   
   WDYT?
   
   We discussed this earlier in the issue.  

