pravin1406 commented on issue #5803:
URL: https://github.com/apache/kyuubi/issues/5803#issuecomment-2656986978

   @yaooqinn 
   
   I'm having a similar issue. We are not able to access Iceberg metadata tables. I have attached the table plans with and without the authz plugin enabled. We have table-level permission and don't expect to have to grant metadata-table-level permissions separately. Is there any work in progress to cover this case?
   
   ```
   Error occurred during query planning:
   Permission denied: user [dmu_mesh_qa1] does not have [select] privilege on [mesh_qa1_mart.Testicec1/files/content,mesh_qa1_mart.Testicec1/files/file_path,mesh_qa1_mart.Testicec1/files/file_format,mesh_qa1_mart.Testicec1/files/spec_id,mesh_qa1_mart.Testicec1/files/record_count,mesh_qa1_mart.Testicec1/files/file_size_in_bytes,mesh_qa1_mart.Testicec1/files/column_sizes,mesh_qa1_mart.Testicec1/files/value_counts,mesh_qa1_mart.Testicec1/files/null_value_counts,mesh_qa1_mart.Testicec1/files/nan_value_counts,mesh_qa1_mart.Testicec1/files/lower_bounds,mesh_qa1_mart.Testicec1/files/upper_bounds,mesh_qa1_mart.Testicec1/files/key_metadata,mesh_qa1_mart.Testicec1/files/split_offsets,mesh_qa1_mart.Testicec1/files/equality_ids,mesh_qa1_mart.Testicec1/files/sort_order_id,mesh_qa1_mart.Testicec1/files/readable_metrics]
   ```
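
   For reference, the statement that produces the plans below appears to be a simple limited scan of the Iceberg `files` metadata table; the exact SQL is reconstructed from the parsed logical plan and is an assumption:

   ```sql
   -- Reconstructed from the parsed plan: Project [*] over
   -- UnresolvedRelation [mesh_qa1_mart, Testicec1, files] with a limit of 10.
   SELECT * FROM spark_catalog.mesh_qa1_mart.Testicec1.files LIMIT 10;
   ```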
   
   ```
   == Parsed Logical Plan ==
   'GlobalLimit 10
   +- 'LocalLimit 10
      +- 'Project [*]
         +- 'UnresolvedRelation [mesh_qa1_mart, Testicec1, files], [], false

   == Analyzed Logical Plan ==
   content: int, file_path: string, file_format: string, spec_id: int, record_count: bigint, file_size_in_bytes: bigint, column_sizes: map<int,bigint>, value_counts: map<int,bigint>, null_value_counts: map<int,bigint>, nan_value_counts: map<int,bigint>, lower_bounds: map<int,binary>, upper_bounds: map<int,binary>, key_metadata: binary, split_offsets: array<bigint>, equality_ids: array<int>, sort_order_id: int
   GlobalLimit 10
   +- LocalLimit 10
      +- Project [content#153, file_path#154, file_format#155, spec_id#156, record_count#157L, file_size_in_bytes#158L, column_sizes#159, value_counts#160, null_value_counts#161, nan_value_counts#162, lower_bounds#163, upper_bounds#164, key_metadata#165, split_offsets#166, equality_ids#167, sort_order_id#168]
         +- SubqueryAlias spark_catalog.mesh_qa1_mart.Testicec1.files
            +- RelationV2[content#153, file_path#154, file_format#155, spec_id#156, record_count#157L, file_size_in_bytes#158L, column_sizes#159, value_counts#160, null_value_counts#161, nan_value_counts#162, lower_bounds#163, upper_bounds#164, key_metadata#165, split_offsets#166, equality_ids#167, sort_order_id#168] spark_catalog.mesh_qa1_mart.Testicec1.files

   == Optimized Logical Plan ==
   GlobalLimit 10
   +- LocalLimit 10
      +- RelationV2[content#153, file_path#154, file_format#155, spec_id#156, record_count#157L, file_size_in_bytes#158L, column_sizes#159, value_counts#160, null_value_counts#161, nan_value_counts#162, lower_bounds#163, upper_bounds#164, key_metadata#165, split_offsets#166, equality_ids#167, sort_order_id#168] spark_catalog.mesh_qa1_mart.Testicec1.files

   == Physical Plan ==
   CollectLimit 10
   +- *(1) Project [content#153, file_path#154, file_format#155, spec_id#156, record_count#157L, file_size_in_bytes#158L, column_sizes#159, value_counts#160, null_value_counts#161, nan_value_counts#162, lower_bounds#163, upper_bounds#164, key_metadata#165, split_offsets#166, equality_ids#167, sort_order_id#168]
      +- BatchScan[content#153, file_path#154, file_format#155, spec_id#156, record_count#157L, file_size_in_bytes#158L, column_sizes#159, value_counts#160, null_value_counts#161, nan_value_counts#162, lower_bounds#163, upper_bounds#164, key_metadata#165, split_offsets#166, equality_ids#167, sort_order_id#168] spark_catalog.mesh_qa1_mart.Testicec1.files [filters=] RuntimeFilters: []
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@kyuubi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

