twalthr opened a new pull request, #26331:
URL: https://github.com/apache/flink/pull/26331

   ## What is the purpose of the change
   
   This enables PTFs via the Table API. It supports both the object-based and the SQL-based representation of the query operation tree. Along the way, this PR fixes various bugs introduced in previous PTF-related PRs.
   
   Example:
   ```java
   import static org.apache.flink.table.api.Expressions.$;

   import org.apache.flink.table.annotation.ArgumentHint;
   import org.apache.flink.table.annotation.ArgumentTrait;
   import org.apache.flink.table.annotation.StateHint;
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   import org.apache.flink.table.functions.ProcessTableFunction;
   import org.apache.flink.types.Row;

   public final class PTFExample {

       public static void main(String[] args) throws Exception {
           final EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
           final TableEnvironment env = TableEnvironment.create(settings);

           // PTF with row semantics: called once per input row, no partitioning required
           env.fromValues("Bob", "Alice", "Bob")
                   .as("name")
                   .process(ProcessFunctionWithRowSemantics.class)
                   .execute()
                   .print();

           // PTF with set semantics: state is scoped to the current partition key
           env.fromValues("Bob", "Alice", "Bob")
                   .as("name")
                   .partitionBy($("name"))
                   .process(ProcessFunctionWithSetSemantics.class)
                   .execute()
                   .print();
       }

       public static class ProcessFunctionWithRowSemantics extends ProcessTableFunction<String> {
           public void eval(@ArgumentHint(ArgumentTrait.TABLE_AS_ROW) Row input) {
               collect("Hello " + input.getFieldAs("name") + "!");
           }
       }

       public static class ProcessFunctionWithSetSemantics extends ProcessTableFunction<String> {

           public static class CountState {
               public long counter = 0L;
           }

           public void eval(
                   @StateHint CountState state,
                   @ArgumentHint(ArgumentTrait.TABLE_AS_SET) Row input) {
               state.counter++;
               collect("Hello " + input.getFieldAs("name") + ", your " + state.counter + " time?");
           }
       }
   }
   ```
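   
   As a side note on the SQL-based representation mentioned above, here is a minimal sketch (continuing from `main` in the example) of how the object-based pipeline could be turned into its SQL form. It assumes the existing `QueryOperation#asSerializableString()` hook also covers PTF calls; the exact SQL output is not claimed here.
   
   ```java
   // Sketch only: assumes asSerializableString() renders the PTF call as SQL.
   // Continues from the main() method of the example above.
   Table ptfCall = env.fromValues("Bob", "Alice", "Bob")
           .as("name")
           .partitionBy($("name"))
           .process(ProcessFunctionWithSetSemantics.class);

   // SQL-based representation of the query operation tree
   System.out.println(ptfCall.getQueryOperation().asSerializableString());
   ```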
   
   
   ## Brief change log
   
   - Disallow optional table arguments, as they don't play well with position-based insertion of default arguments.
   - Support descriptor literals in the Table API.
   - Make `TableReferenceExpression` serializable.
   - Return the time column in `TableSemantics` at runtime (see the sketch after this list).
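   
   For the last point, a minimal sketch of how a PTF could inspect its input's `TableSemantics` at runtime. The `Context` parameter is part of the PTF programming model; the accessors `tableSemanticsFor(...)` and `timeColumn()` are assumptions inferred from this change log entry, not confirmed signatures.
   
   ```java
   // Sketch only: tableSemanticsFor(...) and timeColumn() are assumed accessors,
   // inferred from the change log entry above rather than taken from the final API.
   public static class SemanticsAwareFunction extends ProcessTableFunction<String> {
       public void eval(Context ctx, @ArgumentHint(ArgumentTrait.TABLE_AS_SET) Row input) {
           // "input" refers to the table argument by its eval parameter name
           TableSemantics semantics = ctx.tableSemanticsFor("input");
           // The time column of the table argument is now available at runtime
           collect("time column: " + semantics.timeColumn());
       }
   }
   ```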
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the conventions for tests defined in our code quality guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   This change added tests and can be verified as follows:
   
   - `ProcessTableFunctionTest`
   - `ProcessTableFunctionSemanticTests`
   - `QueryOperationSqlSemanticTest`
   - `QueryOperationSqlSerializationTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: yes
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? yes
     - If yes, how is the feature documented? JavaDocs
   

