autophagy opened a new pull request, #26166:
URL: https://github.com/apache/flink/pull/26166
## What is the purpose of the change
Currently, the `TablePipeline` descriptor is not supported in the PyFlink
Table API, so API methods that rely on it, such as `insert_into` on a
`Table`, are unavailable as well. This PR introduces the `TablePipeline`
descriptor so that users can set up a table pipeline and then explain it,
execute it, and so on.
## Brief change log
- Introduced the `TablePipeline` descriptor.
- Introduced the `ObjectIdentifier` catalog entity.
- Added `StatementSet.add` to add a `TablePipeline` to the `StatementSet`.
- Added `Table.insert_into` to declare a `TablePipeline` with a given table
path or table descriptor.
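For context, the `ObjectIdentifier` catalog entity names a catalog object by its three-part path (catalog, database, object). The sketch below is a minimal, illustrative Python model of that idea only; the class layout and method names here are assumptions for illustration and are not the actual PyFlink implementation introduced by this PR:

```python
# Illustrative sketch of a three-part object identifier
# (catalog.database.object); not PyFlink's actual ObjectIdentifier class.
from dataclasses import dataclass


@dataclass(frozen=True)
class ObjectIdentifier:
    """Fully qualified path of a catalog object."""
    catalog_name: str
    database_name: str
    object_name: str

    def full_name(self) -> str:
        # Back-tick quoting keeps names containing dots unambiguous.
        return ".".join(
            f"`{part}`"
            for part in (self.catalog_name, self.database_name, self.object_name)
        )


ident = ObjectIdentifier("default_catalog", "default_database", "my_sink")
print(ident.full_name())  # `default_catalog`.`default_database`.`my_sink`
```

A fully qualified identifier like this lets a `TablePipeline` record its sink unambiguously, independent of the session's current catalog and database.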
## Verifying this change
This change is already covered by existing tests:
- Catalog and Table API completeness tests
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (yes, functions added to `Table` and `StatementSet`)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (yes)
- If yes, how is the feature documented? (docs)
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]