wangxiaojing opened a new issue, #10477:
URL: https://github.com/apache/gravitino/issues/10477
### Version
main branch
### Describe what's wrong
When writing to a Paimon partitioned table with Flink, the partition
information is not updated in the Hive metastore
### Error message and/or stacktrace
```
CREATE TABLE `paimon_catalog`.`paimon_test`.`paimon_streaming_flow_hi` (
`id` DECIMAL(20, 0),
`dt` VARCHAR(2147483647)
) COMMENT 'paimon '
PARTITIONED BY (`dt`)
WITH (
'bucket' = '-1',
'path' = 'bos://xxxxx',
'table.user-type' = '1',
'metastore.partitioned-table' = 'true',
'deletion-vectors.enabled' = 'true',
'partition.expiration-time' = '7d',
'consumer.expiration-time' = '3d'
);
CREATE TABLE source (
`id` DECIMAL(20, 0)
) WITH (
'connector' = 'datagen'
);
```
```
insert into `paimon_catalog`.`paimon_test`.`paimon_streaming_flow_hi`
select id, '2026-03-19' from source;
```
When I run SHOW PARTITIONS, the new partition dt=2026-03-19 is not listed:
```
show partitions `paimon_test`.`paimon_streaming_flow_hi`;
+----------------+
| partition |
+----------------+
| dt=2026-03-01 |
| dt=2026-03-02 |
+----------------+
2 rows selected (0.138 seconds)
```
### How to reproduce
Without this override, calling getTable() on the base catalog falls back to
the Gravitino REST API, which returns a plain CatalogTable that lacks the
necessary Paimon-specific context.
When Paimon's Flink connector (AbstractFlinkTableFactory.buildPaimonTable())
receives this plain table, it creates a FileStoreTable with
CatalogEnvironment.empty(), whose catalogLoader is null. Consequently,
partitionHandler() returns null, the AddPartitionCommitCallback is never
registered, and the Hive partition metadata is never updated after commits.
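The failure chain above can be illustrated with a minimal, self-contained model. This is not Paimon's real API; the class and method names below (PartitionCallbackModel, the simplified CatalogEnvironment record, commitCallbacks) are stand-ins chosen to mirror the logic described in the report: an empty catalog environment carries no loader, so the partition handler is null and the commit callback is never registered.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionCallbackModel {
    // Stand-in for Paimon's CatalogEnvironment: empty() carries no catalog loader.
    record CatalogEnvironment(Object catalogLoader) {
        static CatalogEnvironment empty() {
            return new CatalogEnvironment(null);
        }
    }

    // Stand-in for FileStoreTable.partitionHandler(): no loader, no handler.
    static Object partitionHandler(CatalogEnvironment env) {
        return env.catalogLoader() == null ? null : new Object();
    }

    // Stand-in for commit-time callback registration: the partition-sync
    // callback is only added when a partition handler exists.
    static List<String> commitCallbacks(CatalogEnvironment env) {
        List<String> callbacks = new ArrayList<>();
        if (partitionHandler(env) != null) {
            callbacks.add("AddPartitionCommitCallback");
        }
        return callbacks;
    }

    public static void main(String[] args) {
        // Plain CatalogTable from the REST API -> empty environment -> no callback.
        System.out.println(commitCallbacks(CatalogEnvironment.empty()));
        // Context-rich Paimon table -> loader present -> callback registered.
        System.out.println(commitCallbacks(new CatalogEnvironment(new Object())));
    }
}
```

Running this prints an empty callback list for the empty environment and a list containing the partition callback once a loader is present, which is the behavioral difference the fix restores.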
This fix resolves the issue by ensuring the context-rich Paimon table object
is returned, so the partition update mechanism functions properly.
### Additional context
_No response_
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]