saulius opened a new issue, #17602:
URL: https://github.com/apache/druid/issues/17602

   The docs at https://druid.apache.org/docs/latest/development/extensions-core/bloom-filter/ say the following:
   
   > Bloom filters can be computed in SQL expressions with the bloom_filter aggregator:
   > `SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc'`
   > but requires the setting druid.sql.planner.serializeComplexValues to be set to true. Bloom filter results in a SQL response are serialized into a base64 string, which can then be used in subsequent queries as a filter.
   
   I'm trying to do exactly that using the default `kttm_rollup` dataset, e.g.:
   
   ```
   WITH yunowork AS (
    SELECT 
       BLOOM_FILTER("ip_address", 10000) AS bf
     FROM druid.kttm_rollup
   )
   SELECT
     ip_address
   FROM druid.kttm_rollup
   JOIN yunowork Y ON 1 = 1
   WHERE BLOOM_FILTER_TEST(ip_address, Y.bf)
   ```
   
   I get the following error:
   
   ```
   Error: INVALID_INPUT
   
   Cannot apply 'BLOOM_FILTER_TEST' to arguments of type 'BLOOM_FILTER_TEST(<VARCHAR>, <COMPLEX<BLOOM>>)'. Supported form(s): 'BLOOM_FILTER_TEST(<ANY>, <CHARACTER>)' (line [10], column [7])
   ```
   
   `druid.sql.planner.serializeComplexValues` is set to true, but as far as I understand [it's irrelevant](https://github.com/apache/druid/pull/17549).
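   
   For what it's worth, my reading of the docs and of the error message is that `BLOOM_FILTER_TEST` wants the base64 string itself as its second argument, so the intended flow would seem to be two separate queries rather than a join (just a sketch; the base64 literal below is a placeholder, not a real value):
   
   ```
   -- Step 1: compute the filter; with serializeComplexValues=true the
   -- result comes back serialized as a base64 string
   SELECT BLOOM_FILTER("ip_address", 10000) AS bf
   FROM druid.kttm_rollup;
   
   -- Step 2: paste that base64 string into the follow-up query as a literal
   SELECT ip_address
   FROM druid.kttm_rollup
   WHERE BLOOM_FILTER_TEST(ip_address, '<base64 string from step 1>');
   ```
   
   If that's the expected usage, it would be nice if the docs said so explicitly, since the quoted example reads as if a single SQL expression should work.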
   
   Am I doing something wrong?
   
   ### Affected Version
   
   31.0.1 (locally on macOS) and 26.0.0 (cluster on Linux); I assume all versions in between are affected too.
   
   ### Description
   
   Mostly described above.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

