This is an automated email from the ASF dual-hosted git repository.
chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git
The following commit(s) were added to refs/heads/master by this push:
new 526a8aa9c8 [KYUUBI #7299] [AUTHZ] Remove redundant concatenation with the list of conditions
526a8aa9c8 is described below
commit 526a8aa9c81f27f139c7e73c6db569fd31aec5c3
Author: attilapiros <[email protected]>
AuthorDate: Fri Jan 9 09:56:53 2026 +0800
[KYUUBI #7299] [AUTHZ] Remove redundant concatenation with the list of conditions
### Why are the changes needed?
The concatenation with the list of conditions `conditionList` is redundant, because the `columnPrune` method already accounts for it.
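As a standalone illustration of the redundancy (a minimal sketch with assumed names and shapes, not the actual Kyuubi internals: `Attr`, the simplified `columnPrune`, and its internal union with `conditionList` are all hypothetical stand-ins), if the pruning step itself already folds in the condition expressions, concatenating `conditionList` at the call site changes nothing:

```scala
object Sketch {
  // Stand-in for a Spark attribute / named expression (hypothetical).
  case class Attr(name: String)

  // Stand-in for the filter conditions the plan carries (hypothetical).
  val conditionList: Seq[Attr] = Seq(Attr("id"))

  // Assumption: the real columnPrune already unions the conditions
  // before restricting to the plan's output set.
  def columnPrune(exprs: Seq[Attr], outputSet: Set[Attr]): Seq[Attr] =
    (exprs ++ conditionList).filter(outputSet.contains)

  def main(args: Array[String]): Unit = {
    val outputSet = Set(Attr("id"), Attr("name"))
    val projectionList = Seq(Attr("name"))

    // Before the patch: conditions concatenated at the call site too.
    val before = columnPrune(projectionList ++ conditionList, outputSet)
    // After the patch: only the projection list is passed in.
    val after = columnPrune(projectionList, outputSet)

    // Modulo the `.distinct` the caller applies anyway, both calls
    // yield the same set of column names.
    assert(before.map(_.name).distinct == after.map(_.name).distinct)
  }
}
```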
### How was this patch tested?
Existing unit tests.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7299 from attilapiros/apiros_fix_1.
Closes #7299
18af3589a [attilapiros] Remove redundant concatenation with the list of conditions
Authored-by: attilapiros <[email protected]>
Signed-off-by: Cheng Pan <[email protected]>
---
.../scala/org/apache/kyuubi/plugin/spark/authz/PrivilegesBuilder.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/extensions/spark/kyuubi-spark-authz/src/main/scala/org/apache/kyuubi/plugin/spark/authz/PrivilegesBuilder.scala b/extensions/spark/kyuubi-spark-authz/src/main/scala/org/apache/kyuubi/plugin/spark/authz/PrivilegesBuilder.scala
index 0bd55c8add..6bebfc8761 100644
--- a/extensions/spark/kyuubi-spark-authz/src/main/scala/org/apache/kyuubi/plugin/spark/authz/PrivilegesBuilder.scala
+++ b/extensions/spark/kyuubi-spark-authz/src/main/scala/org/apache/kyuubi/plugin/spark/authz/PrivilegesBuilder.scala
@@ -69,7 +69,7 @@ object PrivilegesBuilder {
       if (projectionList.isEmpty) {
         privilegeObjects += PrivilegeObject(table, plan.output.map(_.name))
       } else {
-        val cols = columnPrune(projectionList ++ conditionList, plan.outputSet)
+        val cols = columnPrune(projectionList, plan.outputSet)
         privilegeObjects += PrivilegeObject(table, cols.map(_.name).distinct)
       }
     }