yashmayya commented on code in PR #14830:
URL: https://github.com/apache/pinot/pull/14830#discussion_r1926386163
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/catalog/PinotCatalog.java:
##########
@@ -80,8 +82,20 @@ public Table getTable(String name) {
   */
  @Override
  public Set<String> getTableNames() {
-    return _tableCache.getTableNameMap().keySet().stream().filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
-        .collect(Collectors.toSet());
+    if (StringUtils.isBlank(_databaseName)) {
+      return _tableCache.getTableNameMap().keySet().stream()
+          .filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
+          .collect(Collectors.toSet());
+    } else {
+      Set<String> result = new HashSet<>();
+      for (String n: _tableCache.getTableNameMap().keySet()) {
Review Comment:
nit: let's use a more descriptive name like `tableName` or
`fullyQualifiedTableName` instead of `n`?
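For illustration, roughly this (a sketch only; the loop body is assumed to mirror the stream's filter above and may differ from what the PR actually does):

```java
Set<String> result = new HashSet<>();
for (String tableName : _tableCache.getTableNameMap().keySet()) {
  // Same predicate as the stream-based branch; the descriptive name makes
  // the intent clearer than `n`.
  if (DatabaseUtils.isPartOfDatabase(tableName, _databaseName)) {
    result.add(tableName);
  }
}
return result;
```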
##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/MultiStageEngineIntegrationTest.java:
##########
@@ -1347,6 +1347,15 @@ public void testConcurrentQueries() {
    executorService.shutdownNow();
  }
+  @Test
+  public void testCaseInsensitiveNames() throws Exception {
+    String query = "select ACTualELAPsedTIMe from mYtABLE where actUALelAPSedTIMe > 0 limit 1";
+    JsonNode jsonNode = postQuery(query);
Review Comment:
Nah, most other tests should see no behavioral difference between the two, since the controller API basically just proxies the request to the broker API. But in this case, since there were some changes to the controller API too, I figured this additional test would be nice to have. Thanks for adding it!
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/catalog/PinotCatalog.java:
##########
@@ -80,8 +82,20 @@ public Table getTable(String name) {
   */
  @Override
  public Set<String> getTableNames() {
-    return _tableCache.getTableNameMap().keySet().stream().filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
-        .collect(Collectors.toSet());
+    if (StringUtils.isBlank(_databaseName)) {
+      return _tableCache.getTableNameMap().keySet().stream()
+          .filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
Review Comment:
This method assumes that tables in the default database will not be
explicitly prefixed - is that always true for our table cache here?
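To make the concern concrete, a hypothetical stand-alone example (the table names and the naive prefix check are made up for illustration; this is not Pinot's `DatabaseUtils` logic):

```java
import java.util.Set;

public class DefaultDatabasePrefixExample {
  public static void main(String[] args) {
    // Hypothetical table cache keys: one unprefixed, one explicitly
    // prefixed with the default database.
    Set<String> cachedNames = Set.of("myTable", "default.myTable");

    // A naive check that treats only unprefixed names as belonging to the
    // default database would silently drop "default.myTable".
    cachedNames.stream()
        .filter(name -> !name.contains("."))
        .forEach(System.out::println); // prints only "myTable"
  }
}
```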
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/QueryEnvironment.java:
##########
@@ -118,9 +111,15 @@ public QueryEnvironment(Config config) {
    String database = config.getDatabase();
    PinotCatalog catalog = new PinotCatalog(config.getTableCache(), database);
    CalciteSchema rootSchema = CalciteSchema.createRootSchema(false, false, database, catalog);
+    Properties connectionConfigProperties = new Properties();
+    connectionConfigProperties.setProperty(CalciteConnectionProperty.CASE_SENSITIVE.camelName(), Boolean.toString(
+        config.getTableCache() == null
Review Comment:
I missed that, but it looks like it can be nullable only in tests, which should probably be fixed. That's outside the scope of this PR, though, and can be done separately.
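For reference, that separate fix could be as simple as failing fast at construction time (a sketch; the error message and placement are illustrative):

```java
// java.util.Objects is standard JDK; tests would then have to pass a real
// or mocked TableCache instead of null.
TableCache tableCache = Objects.requireNonNull(config.getTableCache(),
    "TableCache must not be null; tests should provide a mock instead");
```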
##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java:
##########
@@ -2994,25 +2994,29 @@ public void testCaseSensitivityV2()
    int daysSinceEpoch = 16138;
    int hoursSinceEpoch = 16138 * 24;
    int secondsSinceEpoch = 16138 * 24 * 60 * 60;
-    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable",
-        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable",
+    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable limit 10000",
+        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable limit 10000",
        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by DaysSinceEpoch "
            + "limit 10000",
        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by timeConvert"
            + "(DaysSinceEpoch,'DAYS','SECONDS') DESC limit 10000",
-        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch,
-        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable",
+        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch
+            + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch
+            + " limit 10000",
+        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable limit 10000",
        "SELECT COUNT(*) FROM mytable GROUP BY dateTimeConvert(DaysSinceEpoch,'1:DAYS:EPOCH','1:HOURS:EPOCH',"
-            + "'1:HOURS')");
+            + "'1:HOURS') limit 10000");
    List<String> queries = new ArrayList<>();
    baseQueries.forEach(q -> queries.add(q.replace("mytable", "MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));
    baseQueries.forEach(
-        q -> queries.add(q.replace("mytable", "MYDB.MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));
+        q -> queries.add(q.replace("mytable", "DEFAULT.MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));
Review Comment:
Now that both query engines should be case-insensitive by default, can we consolidate the two tests into a single parameterized one?
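Something like this, perhaps (a TestNG sketch; the provider and helper names are made up, and the base class is assumed to expose a way to toggle the engine per test run):

```java
// Hypothetical data provider; the test base classes may already ship an
// equivalent toggle for the multi-stage engine.
@DataProvider(name = "bothQueryEngines")
public Object[][] bothQueryEngines() {
  return new Object[][]{{false}, {true}};
}

@Test(dataProvider = "bothQueryEngines")
public void testCaseSensitivity(boolean useMultiStageQueryEngine)
    throws Exception {
  setUseMultiStageQueryEngine(useMultiStageQueryEngine);
  // Hypothetical shared helper holding the queries from the hunk above.
  runCaseSensitivityQueries();
}
```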
##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java:
##########
@@ -2994,25 +2994,29 @@ public void testCaseSensitivityV2()
Review Comment:
Ah, I see. Thanks for the explanation!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]