LakshSingla commented on a change in pull request #12195:
URL: https://github.com/apache/druid/pull/12195#discussion_r791415699



##########
File path: docs/configuration/index.md
##########
@@ -1846,6 +1846,7 @@ The Druid SQL server is configured through the following properties on the Broke
 |`druid.sql.planner.metadataSegmentPollPeriod`|How often to poll coordinator for published segments list if `druid.sql.planner.metadataSegmentCacheEnable` is set to true. Poll period is in milliseconds. |60000|
 |`druid.sql.planner.authorizeSystemTablesDirectly`|If true, Druid authorizes queries against any of the system schema tables (`sys` in SQL) as `SYSTEM_TABLE` resources which require `READ` access, in addition to permissions based content filtering.|false|
 |`druid.sql.planner.useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. It can be overridden per query with `useNativeQueryExplain` context key.|false|
+|`druid.sql.planner.maxNumericInFilters`|If set to a value between 1 and 10,000, Druid will allow numeric values specified for IN part of the query not to exceed this user defined values and queries with more than 10,000 numeric IN filters will not be run by Druid unless they are cast to String.|`10000`|

Review comment:
       It would be good to specify that this limit can be overridden per query in the query context using the `maxNumericInFilters` key.
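   For example, a per-query override through the SQL API payload could look like the following. This is just a minimal sketch; the datasource and column names are made up for illustration:
   ```json
   {
     "query": "SELECT COUNT(*) FROM wikipedia WHERE commentLength IN (11, 38, 57)",
     "context": {
       "maxNumericInFilters": 100
     }
   }
   ```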

##########
File path: docs/querying/query-context.md
##########
@@ -62,7 +62,7 @@ Unless otherwise noted, the following parameters apply to all query types.
 |secondaryPartitionPruning|`true`|Enable secondary partition pruning on the Broker. The Broker will always prune unnecessary segments from the input scan based on a filter on time intervals, but if the data is further partitioned with hash or range partitioning, this option will enable additional pruning based on a filter on secondary partition dimensions.|
 |enableJoinLeftTableScanDirect|`false`|This flag applies to queries which have joins. For joins, where left child is a simple scan with a filter,  by default, druid will run the scan as a query and the join the results to the right child on broker. Setting this flag to true overrides that behavior and druid will attempt to push the join to data servers instead. Please note that the flag could be applicable to queries even if there is no explicit join. since queries can internally translated into a join by the SQL planner.|
 |debug| `false` | Flag indicating whether to enable debugging outputs for the query. When set to false, no additional logs will be produced (logs produced will be entirely dependent on your logging level). When set to true, the following addition logs will be produced:<br />- Log the stack trace of the exception (if any) produced by the query |
-
+|maxNumericInFilters|`10000`|If set to a value between 1 and 10,000, Druid will allow numeric values specified for IN part of the query not to exceed this user defined values and queries with more than 10,000 numeric IN filters will not be run by Druid unless they are cast to String.

Review comment:
       nit: Wording seems a bit off. It implies (to me) that the described behavior only applies when the value is set between 1 and 10,000, but it doesn't mention what would happen if the value is set above 10,000. Something like the below seems clearer and more concise. Also, in general, I am wary of using the default value in the description of the doc.
   ```
   |maxNumericInFilters|`10000`| The maximum number of IN clauses for numeric values that are allowed in a query. To run queries with a greater number of IN clauses, consider casting the values to strings.
   ```
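   To make the suggested workaround concrete, a payload like the one below (datasource and column names invented for illustration) sets the limit to 2 but should still run, per my reading of the flag, because the IN values are string literals rather than numbers:
   ```json
   {
     "query": "SELECT COUNT(*) FROM wikipedia WHERE commentLength IN ('11', '38', '57')",
     "context": {
       "maxNumericInFilters": 2
     }
   }
   ```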
   Please correct me if I am understanding the flag incorrectly. Also, WDYT? 

##########
File path: sql/src/test/java/org/apache/druid/sql/calcite/util/CalciteTests.java
##########
@@ -300,7 +300,7 @@ public AuthenticationResult createEscalatedAuthenticationResult()
           new TimestampSpec(TIMESTAMP_COLUMN, "iso", null),
           new DimensionsSpec(
               ImmutableList.<DimensionSchema>builder()
-                  .addAll(DimensionsSpec.getDefaultSchemas(ImmutableList.of("dim1", "dim2", "dim3", "dim4", "dim5")))
+                  .addAll(DimensionsSpec.getDefaultSchemas(ImmutableList.of("dim1", "dim2", "dim3", "dim4", "dim5", "dim6")))

Review comment:
       Please confirm that the other tests are not failing after this change to the row shape.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


