Copilot commented on code in PR #10194:
URL: https://github.com/apache/gravitino/pull/10194#discussion_r2883455141
##########
spark-connector/spark-common/src/test/java/org/apache/gravitino/spark/connector/integration/test/authorization/SparkAuthorizationIT.java:
##########
@@ -313,6 +317,76 @@ public void testCreateTableIfNotExistsWithoutLoadTablePrivilege() {
}
}
+ @Test
+ @Order(6)
+ public void testLoadTableForWriting() {
+ GravitinoMetalake gravitinoMetalake = client.loadMetalake(METALAKE);
+ String testTable = "test_write_table";
+ String testRole = "role_no_modify";
+
+ // Temporarily revoke ROLE from user
+ gravitinoMetalake.revokeRolesFromUser(ImmutableList.of(ROLE), NORMAL_USER);
+
+ // Create a role with SELECT_TABLE but without MODIFY_TABLE privilege
+ SecurableObject catalogObject =
+ SecurableObjects.ofCatalog(
+ JDBC_CATALOG,
+ ImmutableList.of(
+ Privileges.UseCatalog.allow(),
+ Privileges.UseSchema.allow(),
+ Privileges.SelectTable.allow(),
+ Privileges.CreateTable.allow(),
+ Privileges.ModifyTable.allow()));
+ gravitinoMetalake.createRole(testRole, new HashMap<>(), ImmutableList.of(catalogObject));
+ gravitinoMetalake.grantRolesToUser(ImmutableList.of(testRole), NORMAL_USER);
+
+ TableCatalog tableCatalog = gravitinoMetalake.loadCatalog(JDBC_CATALOG).asTableCatalog();
+ try {
+ // Create a test table first
+ normalUserSparkSession.sql("use " + JDBC_CATALOG);
+ normalUserSparkSession.sql("use " + JDBC_DATABASE);
+ tableCatalog.createTable(
+ NameIdentifier.of(JDBC_DATABASE, testTable),
+ new Column[] {
+ Column.of("id", Types.StringType.get()), Column.of("name", Types.StringType.get())
+ },
+ "",
+ new HashMap<>());
+
+ // Deny MODIFY_TABLE privilege explicitly
+ gravitinoMetalake.grantPrivilegesToRole(
+ testRole,
+ MetadataObjects.of(
+ ImmutableList.of(JDBC_CATALOG, JDBC_DATABASE, testTable), MetadataObject.Type.TABLE),
+ ImmutableList.of(Privileges.ModifyTable.deny()));
+
+ // Assert INSERT behavior - Spark 3.5+ should throw ForbiddenException,
+ // lower versions won't because they don't support TableWritePrivilege
+ assertInsertBehaviorWithoutModifyPrivilege(testTable);
+ } finally {
+ // Clean up
+ try {
+ tableCatalog.dropTable(NameIdentifier.of(JDBC_DATABASE, testTable));
+ } catch (Exception ignored) {
+ // Ignore cleanup errors
Review Comment:
Avoid catching a generic `Exception` here: it can hide real failures (e.g.,
authorization errors) and makes test flakiness harder to diagnose. Catch the
specific exception(s) `dropTable` can throw, or at least rethrow or assert on
cleanup failures so the test doesn't silently pass in a bad state.
```suggestion
} catch (Exception e) {
Assertions.fail("Failed to drop test table " + testTable, e);
```
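The pattern the comment recommends can be illustrated without JUnit or the Gravitino client. The sketch below is a minimal stand-in (hypothetical `Drop` interface and `CleanupFailedException`, not the real `TableCatalog` API): the cleanup step rethrows with context instead of swallowing the error, so a broken cleanup can never pass silently.

```java
// Minimal sketch of the suggested cleanup pattern: surface cleanup failures
// instead of swallowing them. `Drop` and `CleanupFailedException` are
// hypothetical stand-ins, not the Gravitino TableCatalog API.
public class CleanupExample {

    public static class CleanupFailedException extends RuntimeException {
        public CleanupFailedException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    // Functional stand-in for a catalog that can drop a table by name.
    public interface Drop {
        void dropTable(String name);
    }

    // Runs the cleanup step; on failure, rethrows with the table name attached
    // so the test reports the bad state instead of ignoring it.
    public static void cleanup(Drop catalog, String table) {
        try {
            catalog.dropTable(table);
        } catch (RuntimeException e) {
            throw new CleanupFailedException("Failed to drop test table " + table, e);
        }
    }

    public static void main(String[] args) {
        boolean surfaced = false;
        try {
            cleanup(name -> { throw new IllegalStateException("forbidden"); }, "test_write_table");
        } catch (CleanupFailedException e) {
            surfaced = e.getMessage().contains("test_write_table");
        }
        System.out.println(surfaced); // prints "true": the failure is reported, not hidden
    }
}
```

In a real JUnit test, the same effect comes from `Assertions.fail(message, cause)` in the catch block, as the suggestion above shows.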
##########
spark-connector/spark-common/src/main/java/org/apache/gravitino/spark/connector/catalog/BaseCatalog.java:
##########
@@ -498,6 +514,24 @@ public UnboundFunction loadFunction(Identifier ident) throws NoSuchFunctionExcep
throw new NoSuchFunctionException(ident);
}
+ protected Table loadTableForWriting(Identifier ident) throws NoSuchTableException {
+ try {
+ org.apache.gravitino.rel.Table gravitinoTable = loadGravitinoTableForWriting(ident);
+ org.apache.spark.sql.connector.catalog.Table sparkTable = loadSparkTable(ident);
+ // Will create a catalog specific table
+ return createSparkTable(
+ ident,
+ gravitinoTable,
+ sparkTable,
+ sparkCatalog,
+ propertiesConverter,
+ sparkTransformConverter,
+ sparkTypeConverter);
+ } catch (org.apache.gravitino.exceptions.NoSuchTableException e) {
+ throw new NoSuchTableException(ident);
+ }
Review Comment:
This catch block is likely ineffective: `loadGravitinoTableForWriting`
already translates Gravitino's `NoSuchTableException` into Spark's
`NoSuchTableException`, and `loadSparkTable` wraps missing-table as a
`RuntimeException`. Consider removing this catch, or instead
catching/propagating Spark's `NoSuchTableException` so the error handling is
consistent and not misleading.
```suggestion
org.apache.gravitino.rel.Table gravitinoTable = loadGravitinoTableForWriting(ident);
org.apache.spark.sql.connector.catalog.Table sparkTable = loadSparkTable(ident);
// Will create a catalog specific table
return createSparkTable(
ident,
gravitinoTable,
sparkTable,
sparkCatalog,
propertiesConverter,
sparkTransformConverter,
sparkTypeConverter);
```
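The boundary-translation idea behind this comment can be sketched in isolation. In the toy below, the exception classes are simplified stand-ins for Gravitino's and Spark's `NoSuchTableException`: the internal exception is translated exactly once, in the lowest-level loader, so higher-level methods like `loadTableForWriting` carry no redundant (and misleading) catch block.

```java
// Sketch of translating an engine-internal exception exactly once, at the
// lowest-level loader. The exception classes are simplified stand-ins for
// Gravitino's and Spark's NoSuchTableException, not the real types.
public class ExceptionBoundaryExample {

    public static class SparkNoSuchTableException extends RuntimeException {
        public final String ident;
        public SparkNoSuchTableException(String ident) {
            super("table not found: " + ident);
            this.ident = ident;
        }
    }

    // Lowest-level loader: the ONLY place the internal lookup failure is
    // translated into the connector-facing exception type.
    static String loadGravitinoTable(String ident) {
        if (!"known_table".equals(ident)) {
            throw new SparkNoSuchTableException(ident);
        }
        return "gravitino:" + ident;
    }

    // Higher-level method: no redundant catch/translate. A missing table
    // simply propagates as SparkNoSuchTableException, keeping error handling
    // consistent across load paths.
    public static String loadTableForWriting(String ident) {
        String table = loadGravitinoTable(ident);
        return "writable:" + table;
    }

    public static void main(String[] args) {
        System.out.println(loadTableForWriting("known_table")); // prints "writable:gravitino:known_table"
        try {
            loadTableForWriting("missing");
        } catch (SparkNoSuchTableException e) {
            System.out.println("reported missing: " + e.ident);
        }
    }
}
```

The design point: a catch block that can never fire (because the wrapped call already translates the exception) documents a code path that doesn't exist, which is exactly what the review flags.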
##########
spark-connector/spark-common/src/test/java/org/apache/gravitino/spark/connector/integration/test/hive/SparkHiveCatalogIT.java:
##########
@@ -351,7 +352,7 @@ void testHiveFormatWithLocationTable() {
SparkTableInfo tableInfo = getTableInfo(tableName);
checkTableReadWrite(tableInfo);
-    Assertions.assertTrue(tableInfo.getTableLocation().equals(hdfs.getUri() + location));
+    Assertions.assertTrue(tableInfo.getTableLocation().equals(location));
Review Comment:
Use `Assertions.assertEquals(location, tableInfo.getTableLocation())` (or an
equivalent matcher) instead of `assertTrue(a.equals(b))` so failures show
expected vs actual values and avoid relying on manual `.equals()` calls (which
is inconsistent with other assertions in this test class).
```suggestion
Assertions.assertEquals(location, tableInfo.getTableLocation());
```
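The diagnostic difference is easy to demonstrate with a toy re-implementation (this is not JUnit's `Assertions` class, just an illustration): an equality assertion can report expected vs. actual, while a boolean assertion can only say the condition was false.

```java
// Toy illustration of why assertEquals beats assertTrue(a.equals(b)) on
// failure: only the former can show what actually differed. This is NOT
// JUnit's Assertions class, just a minimal stand-in.
public class AssertDiagnosticsExample {

    public static void assertEquals(Object expected, Object actual) {
        boolean equal = (expected == null) ? actual == null : expected.equals(actual);
        if (!equal) {
            // Failure message carries both values, so the diff is visible.
            throw new AssertionError("expected: <" + expected + "> but was: <" + actual + ">");
        }
    }

    public static void assertTrue(boolean condition) {
        if (!condition) {
            // All the comparison detail was lost when the boolean was computed.
            throw new AssertionError("expected: <true> but was: <false>");
        }
    }

    public static void main(String[] args) {
        String expected = "hdfs://ns/warehouse/t1"; // hypothetical locations
        String actual = "file:/tmp/warehouse/t1";
        try {
            assertTrue(expected.equals(actual));
        } catch (AssertionError e) {
            System.out.println("assertTrue:   " + e.getMessage()); // no hint what differed
        }
        try {
            assertEquals(expected, actual);
        } catch (AssertionError e) {
            System.out.println("assertEquals: " + e.getMessage()); // shows both values
        }
    }
}
```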
##########
spark-connector/spark-common/src/test/java/org/apache/gravitino/spark/connector/integration/test/authorization/SparkAuthorizationIT.java:
##########
@@ -313,6 +317,76 @@ public void testCreateTableIfNotExistsWithoutLoadTablePrivilege() {
}
}
+ @Test
+ @Order(6)
+ public void testLoadTableForWriting() {
+ GravitinoMetalake gravitinoMetalake = client.loadMetalake(METALAKE);
+ String testTable = "test_write_table";
+ String testRole = "role_no_modify";
+
+ // Temporarily revoke ROLE from user
+ gravitinoMetalake.revokeRolesFromUser(ImmutableList.of(ROLE), NORMAL_USER);
+
+ // Create a role with SELECT_TABLE but without MODIFY_TABLE privilege
+ SecurableObject catalogObject =
+ SecurableObjects.ofCatalog(
+ JDBC_CATALOG,
+ ImmutableList.of(
+ Privileges.UseCatalog.allow(),
+ Privileges.UseSchema.allow(),
+ Privileges.SelectTable.allow(),
+ Privileges.CreateTable.allow(),
+ Privileges.ModifyTable.allow()));
Review Comment:
The role is described as "without MODIFY_TABLE privilege", but this role is
currently created with `Privileges.ModifyTable.allow()` at the catalog scope.
This makes the test rely on deny/allow precedence rules and can mask
regressions if precedence changes. Remove the catalog-level MODIFY_TABLE allow
(or update the comment and assertions to reflect the intended privilege model).
```suggestion
Privileges.CreateTable.allow()));
```
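The precedence concern can be modeled with a toy evaluator (this is an illustration only, not Gravitino's actual authorization engine). With a catalog-level allow plus a table-level deny, the outcome hinges on the deny-over-allow rule; with no modify grant at all, the outcome is denied regardless of how precedence is defined, which is what makes the test robust.

```java
// Toy model of allow/deny precedence. NOT Gravitino's authorization engine:
// just enough to show why a test that holds both an allow and a deny for the
// same privilege depends on precedence rules, while holding no grant at all
// does not.
import java.util.List;

public class PrecedenceExample {

    public enum Effect { ALLOW, DENY }

    // Effective-permission check with deny taking precedence over allow.
    public static boolean isPermitted(List<Effect> grants) {
        boolean allowed = false;
        for (Effect e : grants) {
            if (e == Effect.DENY) {
                return false; // deny wins over any number of allows
            }
            allowed = true;
        }
        return allowed;
    }

    public static void main(String[] args) {
        // Role as written in the PR: catalog-level ALLOW + table-level DENY.
        // The result is false ONLY because deny currently outranks allow.
        System.out.println(isPermitted(List.of(Effect.ALLOW, Effect.DENY))); // prints "false"

        // Role as the review suggests: no modify grant at all.
        // False no matter how precedence is defined.
        System.out.println(isPermitted(List.of())); // prints "false"
    }
}
```

If the precedence rule ever changed, the first role would silently start permitting writes and the test would fail for the wrong reason; the second role keeps the test's meaning independent of that rule.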
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]