venkatramanp opened a new issue #9532: Lookup without tsColumn in table fails 
ERROR: function max() does not exist
URL: https://github.com/apache/druid/issues/9532
 
 
   ### Affected Version
   
   0.17
   
   ### Description
   
   Created a JDBC lookup backed by a lookup table in Postgres, without specifying the 
tsColumn. This resulted in the following error in the Coordinator log:
   
   2020-03-17T06:33:18,631 ERROR [NamespaceExtractionCacheManager-1] 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler - Failed to 
update namespace 
[JdbcExtractionNamespace{connectorConfig=DbConnectorConfig{createTables=true, 
connectURI='jdbc:postgresql://tgsaas.c14i2nwkuu84.us-east-1.rds.amazonaws.com:5432/tgsaas_dash',
 user='tgview', 
passwordProvider=org.apache.druid.metadata.DefaultPasswordProvider, 
dbcpProperties=null}, table='mv_organisations_metadata', 
keyColumn='organisation_type', valueColumn='product', tsColumn='', 
filter='null', pollPeriod=PT0S}] : 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl@4e6a5c56
   org.skife.jdbi.v2.exceptions.CallbackFailedException: 
org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: 
org.postgresql.util.PSQLException: ERROR: function max() does not exist
     Hint: No function matches the given name and argument types. You might 
need to add explicit type casts.
     Position: 8 [statement:"SELECT MAX() FROM mv_organisations_metadata", 
located:"SELECT MAX() FROM mv_organisations_metadata", rewritten:"SELECT MAX() 
FROM mv_organisations_metadata", arguments:{ positional:{}, named:{}, 
finder:[]}]
           at org.skife.jdbi.v2.DBI.withHandle(DBI.java:284) 
~[jdbi-2.63.1.jar:2.63.1]
           at 
org.apache.druid.server.lookup.namespace.JdbcCacheGenerator.lastUpdates(JdbcCacheGenerator.java:180)
 ~[druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.JdbcCacheGenerator.generateCache(JdbcCacheGenerator.java:66)
 ~[druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.JdbcCacheGenerator.generateCache(JdbcCacheGenerator.java:45)
 ~[druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl.tryUpdateCache(CacheScheduler.java:229)
 [druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl.updateCache(CacheScheduler.java:208)
 [druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl.access$600(CacheScheduler.java:144)
 [druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl$2.run(CacheScheduler.java:190)
 [druid-lookups-cached-global-0.17.0-iap3.jar:0.17.0-iap3]
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_242]
           at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_242]
           at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_242]
           at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_242]
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_242]
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_242]
           at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
   Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: 
org.postgresql.util.PSQLException: ERROR: function max() does not exist
     Hint: No function matches the given name and argument types. You might 
need to add explicit type casts.
     Position: 8 [statement:"SELECT MAX() FROM mv_organisations_metadata", 
located:"SELECT MAX() FROM mv_organisations_metadata", rewritten:"SELECT MAX() 
FROM mv_organisations_metadata", arguments:{ positional:{}, named:{}, 
finder:[]}]
           at 
org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1334) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.fold(Query.java:173) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.first(Query.java:273) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.first(Query.java:264) 
~[jdbi-2.63.1.jar:2.63.1]
           at 
org.apache.druid.server.lookup.namespace.JdbcCacheGenerator.lambda$lastUpdates$2(JdbcCacheGenerator.java:189)
 ~[?:?]
           at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) 
~[jdbi-2.63.1.jar:2.63.1]
                   ... 14 more
   Caused by: org.postgresql.util.PSQLException: ERROR: function max() does not 
exist
     Hint: No function matches the given name and argument types. You might 
need to add explicit type casts.
     Position: 8
           at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
 ~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
 ~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200) 
~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424) 
~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at 
org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:161)
 ~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at 
org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:155) 
~[postgresql-9.4.1208.jre7.jar:9.4.1208.jre7]
           at 
org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1328) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.fold(Query.java:173) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.first(Query.java:273) 
~[jdbi-2.63.1.jar:2.63.1]
           at org.skife.jdbi.v2.Query.first(Query.java:264) 
~[jdbi-2.63.1.jar:2.63.1]
           at 
org.apache.druid.server.lookup.namespace.JdbcCacheGenerator.lambda$lastUpdates$2(JdbcCacheGenerator.java:189)
 ~[?:?]
           at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) 
~[jdbi-2.63.1.jar:2.63.1]
           ... 14 more
   
   To reproduce:
   1. Set up the wikipedia tutorial in the cluster.
   2. Create the following table in Postgres and insert sample lookup values:
   CREATE TABLE lookupTablePg (
     keyColumn varchar(255) NOT NULL,
     valueColumn varchar(255) NOT NULL
   );
   insert into lookupTablePg values ('#es.wikipedia', 'Spanish');
   insert into lookupTablePg values ('#en.wikipedia', 'English');
   3. Copy the Postgres JDBC driver jar to dist/druid/lib
   4. Create the following lookup:
   {
     "type": "cachedNamespace",
     "extractionNamespace": {
       "type": "jdbc",
       "connectorConfig": {
         "createTables": true,
         "connectURI": "jdbc:postgresql://PostgresDb-fqdn:5432/druid",
         "user": "druid",
         "password": "druid"
       },
       "table": "lookupTablePg",
       "keyColumn": "keyColumn",
       "valueColumn": "valueColumn",
       "tsColumn": ""
     },
     "firstCacheTimeout": 120000,
     "injective": true
   }
   5. Run the following query from the Druid console:
   SELECT "channel", LOOKUP ("channel", 'channelLookupPg') as "Language"
   FROM wikipedia
   where "channel" = '#es.wikipedia' 
   
   When the tsColumn is empty, a MAX() call with no column name is sent to Postgres, 
which is why the query fails. The same would fail with MySQL too.
   
   The documentation says tsColumn is not required -> 
https://druid.apache.org/docs/latest/development/extensions-core/lookups-cached-global.html#jdbc-lookup
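   The failure mode described above can be sketched as follows. This is a hypothetical simplification, not the actual Druid source: the method name `buildLastUpdatesQuery` is made up for illustration, and only approximates the query-building step inside `JdbcCacheGenerator.lastUpdates()` seen in the stack trace.

```java
// Hypothetical sketch (not the actual Druid source) of how the lastUpdates
// query ends up malformed when tsColumn is an empty string.
public class LastUpdatesSketch
{
  // Approximates the query-building step in JdbcCacheGenerator.lastUpdates().
  static String buildLastUpdatesQuery(String table, String tsColumn)
  {
    return String.format("SELECT MAX(%s) FROM %s", tsColumn, table);
  }

  public static void main(String[] args)
  {
    // tsColumn = "" is non-null, so the lastUpdates path still runs and
    // produces invalid SQL:
    System.out.println(buildLastUpdatesQuery("mv_organisations_metadata", ""));
    // prints: SELECT MAX() FROM mv_organisations_metadata

    // A guard that treats a blank tsColumn like a missing one would skip
    // the lastUpdates query entirely:
    String tsColumn = "";
    if (tsColumn == null || tsColumn.isEmpty()) {
      System.out.println("tsColumn is blank; skip lastUpdates check");
    }
  }
}
```

   As a possible workaround (untested assumption), omitting the `"tsColumn"` key from the lookup spec entirely, instead of setting it to `""`, may leave the field null and avoid the lastUpdates query altogether.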
   
   
   
   
