abhishekrb19 commented on code in PR #16223:
URL: https://github.com/apache/druid/pull/16223#discussion_r1567873060


##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteCatalogIngestionDmlTest.java:
##########
@@ -210,11 +237,492 @@ public DruidTable resolveDatasource(
           final DatasourceTable.PhysicalDatasourceMetadata dsMetadata
       )
       {
-        if (resolvedTables.get(tableName) != null) {
-          return resolvedTables.get(tableName);
+        if (RESOLVED_TABLES.get(tableName) != null) {
+          return RESOLVED_TABLES.get(tableName);
         }
         return dsMetadata == null ? null : new DatasourceTable(dsMetadata);
       }
     };
   }
+
+  /**
+   * If the segment grain is given in the catalog and absent in the PARTITIONED BY clause in the query, then use the
+   * value from the catalog.
+   */
+  @Test
+  public void testInsertHourGrainPartitonedByFromCatalog()
+  {
+    testIngestionQuery()
+        .sql(StringUtils.format(dmlPrefixPattern, "hourDs") + "\n" +
+             "SELECT * FROM foo")
+        .authentication(CalciteTests.SUPER_USER_AUTH_RESULT)
+        .expectTarget("hourDs", FOO_TABLE_SIGNATURE)
+        .expectResources(dataSourceWrite("hourDs"), dataSourceRead("foo"))
+        .expectQuery(
+            newScanQueryBuilder()
+                .dataSource("foo")
+                .intervals(querySegmentSpec(Filtration.eternity()))
+                // Scan query lists columns in alphabetical order independent of the
+                // SQL project list or the defined schema.
+                .columns("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1")
+                .context(queryContextWithGranularity(Granularities.HOUR))
+                .build()
+        )
+        .verify();
+  }
+
+  /**
+   * If the segment grain is given in the catalog, and also by PARTITIONED BY, then
+   * the query value is used.
+   */
+  @Test
+  public void testInsertHourGrainWithDayPartitonedByFromQuery()
+  {
+    testIngestionQuery()
+        .sql(StringUtils.format(dmlPrefixPattern, "hourDs") + "\n" +
+             "SELECT * FROM foo\n" +
+             "PARTITIONED BY day")
+        .authentication(CalciteTests.SUPER_USER_AUTH_RESULT)
+        .expectTarget("hourDs", FOO_TABLE_SIGNATURE)
+        .expectResources(dataSourceWrite("hourDs"), dataSourceRead("foo"))
+        .expectQuery(
+            newScanQueryBuilder()
+                .dataSource("foo")
+                .intervals(querySegmentSpec(Filtration.eternity()))
+                // Scan query lists columns in alphabetical order independent of the
+                // SQL project list or the defined schema.
+                .columns("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1")
+                .context(queryContextWithGranularity(Granularities.DAY))
+                .build()
+        )
+        .verify();
+  }
+
+  /**
+   * If the segment grain is absent in the catalog and absent in the PARTITIONED BY clause in the query, then
+   * validation error.
+   */
+  @Test
+  public void testInsertNoPartitonedByFromCatalog()
+  {
+    testIngestionQuery()
+        .sql(StringUtils.format(dmlPrefixPattern, "noPartitonedBy") + "\n" +
+             "SELECT * FROM foo")
+        .authentication(CalciteTests.SUPER_USER_AUTH_RESULT)
+        .expectValidationError(
+            DruidException.class,
+            StringUtils.format("Operation [%s] requires a PARTITIONED BY to be explicitly defined, but none was found.", operationName)

Review Comment:
   Perhaps be more explicit in the error message (the actual code also needs to change):
   ```suggestion
               StringUtils.format("Operation [%s] requires a PARTITIONED BY clause to be explicitly defined in the query or the catalog, but none was found.", operationName)
   ```
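For context, the precedence rule these tests pin down (query-level PARTITIONED BY wins over the catalog's segment grain, and it is a validation error when neither is present) can be sketched as follows. This is a simplified, hypothetical helper for illustration only, not Druid's actual resolution code; the class and method names are invented.

```java
import java.util.Optional;

// Hypothetical sketch of the segment-grain precedence the tests above exercise.
public class SegmentGrainResolution
{
  static String resolveGranularity(Optional<String> queryPartitionedBy, Optional<String> catalogGrain)
  {
    if (queryPartitionedBy.isPresent()) {
      // An explicit PARTITIONED BY in the query takes precedence.
      return queryPartitionedBy.get();
    }
    if (catalogGrain.isPresent()) {
      // Otherwise fall back to the grain defined in the catalog.
      return catalogGrain.get();
    }
    // Neither source defines a grain: validation fails.
    throw new IllegalStateException(
        "Operation requires a PARTITIONED BY clause to be explicitly defined in the query or the catalog");
  }

  public static void main(String[] args)
  {
    // Query grain overrides catalog grain.
    System.out.println(resolveGranularity(Optional.of("DAY"), Optional.of("HOUR")));
    // Catalog grain is used when the query omits PARTITIONED BY.
    System.out.println(resolveGranularity(Optional.empty(), Optional.of("HOUR")));
  }
}
```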



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

