twalthr commented on a change in pull request #18134:
URL: https://github.com/apache/flink/pull/18134#discussion_r771245707



##########
File path: flink-dist/src/main/assemblies/opt.xml
##########
@@ -59,11 +52,17 @@
                        <fileMode>0644</fileMode>
                </file>
 
-               <!-- SQL Client -->
+               <!-- SQL -->

Review comment:
       nit: for consistency `Table/SQL dependencies`

##########
File path: flink-dist/src/main/assemblies/bin.xml
##########
@@ -79,11 +79,31 @@ under the License.
                        <fileMode>0644</fileMode>
                </file>
 
-               <!-- Table/SQL Uber JAR -->
+               <!-- Table jars -->
                <file>
-                       <source>../flink-table/flink-table-uber/target/flink-table-uber_${scala.binary.version}-${project.version}.jar</source>
+                       <source>../flink-table/flink-table-api-uber/target/flink-table-api-uber-${project.version}.jar</source>
                        <outputDirectory>lib/</outputDirectory>
-                       <destName>flink-table_${scala.binary.version}-${project.version}.jar</destName>
+                       <destName>flink-table-api-uber-${project.version}.jar</destName>
+                       <fileMode>0644</fileMode>
+               </file>
+               <file>
+                       <source>../flink-table/flink-table-runtime/target/flink-table-runtime-${project.version}.jar</source>
+                       <outputDirectory>lib/</outputDirectory>
+                       <destName>flink-table-runtime-${project.version}.jar</destName>
+                       <fileMode>0644</fileMode>
+               </file>
+               <file>
+                       <source>../flink-table/flink-table-planner-loader/target/flink-table-planner-loader-${project.version}.jar</source>
+                       <outputDirectory>lib/</outputDirectory>
+                       <destName>flink-table-planner-loader-${project.version}.jar</destName>
+                       <fileMode>0644</fileMode>
+               </file>
+
+               <!-- CEP (Required for Table MATCH RECOGNIZE) -->

Review comment:
       nit: `Required for SQL MATCH_RECOGNIZE`

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions

Review comment:
       `Internal data types definitions` -> `Built-in function definitions`

##########
File path: flink-table/flink-table-planner-loader/src/main/java/org/apache/flink/table/planner/loader/DelegateExecutorFactory.java
##########
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.loader;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.delegation.Executor;
+import org.apache.flink.table.delegation.ExecutorFactory;
+import org.apache.flink.table.delegation.StreamExecutorFactory;
+
+/** Delegate of {@link ExecutorFactory}. */
+public class DelegateExecutorFactory extends BaseDelegateFactory<StreamExecutorFactory>

Review comment:
       as always: annotation is missing
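For readers outside the Flink codebase: the missing annotation is presumably `org.apache.flink.annotation.Internal`, which marks a class as non-public API. A minimal, self-contained sketch of the pattern (the nested `Internal` annotation here is a stand-in for illustration, not the real Flink one):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationSketch {
    // Stand-in for org.apache.flink.annotation.Internal (assumption: that is
    // the annotation the reviewer means; it flags a class as internal API).
    @Retention(RetentionPolicy.RUNTIME)
    @interface Internal {}

    // The class under review would carry the marker annotation like this.
    @Internal
    static class DelegateExecutorFactory {}

    public static void main(String[] args) {
        // The marker is visible at runtime via reflection.
        System.out.println(
                DelegateExecutorFactory.class.isAnnotationPresent(Internal.class));
    }
}
```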

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions
+  * `Factory` definitions for catalogs, formats, connectors
+  * Other core APIs such as `Schema`
+  * Utilities to deal with type system, internal data types and printing
+  * When implementing a format, you usually need to depend only on this module
+
+### API
+
+* `flink-table-api-java`: 
+  * Java APIs for Table API and SQL
+  * Package `org.apache.flink.table.delegation`, which serves as entrypoint for all planner capabilities
+* `flink-table-api-scala`: Scala APIs for Table API and SQL
+* `flink-table-api-bridge-base`: Base classes for APIs to bridge between Table API and DataStream API
+* `flink-table-api-java-bridge`:
+  * Java APIs to bridge between Table API and DataStream API
+  * When implementing a connector, you usually need to depend only on this module, in order to bridge your connector implementation developed with DataStream to Table API
+* `flink-table-api-scala-bridge`: Scala APIs to bridge between Table API and DataStream API
+* `flink-table-api-uber`: Uber JAR bundling `flink-table-common` and all the Java API modules, including 3rd party dependencies.

Review comment:
       This indicates that we should discuss the package name again. It is not obvious that Scala and the Java bridging module are not included.

##########
File path: flink-end-to-end-tests/run-nightly-tests.sh
##########
@@ -185,7 +185,8 @@ function run_group_2 {
 
     run_test "DataSet allround end-to-end test" "$END_TO_END_DIR/test-scripts/test_batch_allround.sh"
     run_test "Batch SQL end-to-end test" "$END_TO_END_DIR/test-scripts/test_batch_sql.sh"
-    run_test "Streaming SQL end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh" "skip_check_exceptions"
+    run_test "Streaming SQL end-to-end test using planner loader" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh" "skip_check_exceptions"
+    run_test "Streaming SQL end-to-end test using planner with scala version" "$END_TO_END_DIR/test-scripts/test_streaming_sql_scala_planner_jar.sh" "skip_check_exceptions"

Review comment:
       nit: here and at other locations, we should write `Scala` instead of `scala`

##########
File path: flink-end-to-end-tests/test-scripts/test_streaming_sql_scala_planner_jar.sh
##########
@@ -0,0 +1,42 @@
+#!/usr/bin/env bash
+################################################################################
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+################################################################################
+
+source "$(dirname "$0")"/common.sh
+
+TEST_PROGRAM_JAR=${END_TO_END_DIR}/flink-stream-sql-test/target/StreamSQLTestProgram.jar

Review comment:
       let's avoid duplicate code and just parameterize the other test?

##########
File path: flink-examples/flink-examples-table/pom.xml
##########
@@ -83,6 +78,18 @@ under the License.
                        <version>${project.version}</version>
                        <scope>test</scope>
                </dependency>
+               <dependency>

Review comment:
       can we agree on a dependency order in the `pom.xml`s? common, api, planner(loader), runtime

##########
File path: flink-end-to-end-tests/test-scripts/common.sh
##########
@@ -147,6 +147,16 @@ function add_optional_plugin() {
     cp "$FLINK_DIR/opt/flink-$plugin"*".jar" "$plugin_dir"
 }
 
+function swap_planner_loader_with_planner_scala() {
+  mv "$FLINK_DIR/lib/flink-table-planner-loader"*".jar" "$FLINK_DIR/opt"
+  mv "$FLINK_DIR/opt/flink-table-planner_2"*".jar" "$FLINK_DIR/lib"

Review comment:
       remove `2`?

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions
+  * `Factory` definitions for catalogs, formats, connectors
+  * Other core APIs such as `Schema`

Review comment:
       ```
   Core APIs for extension points such as `Schema`
   ```

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions
+  * `Factory` definitions for catalogs, formats, connectors

Review comment:
       `Extension points for catalogs, formats, connectors`

##########
File path: flink-end-to-end-tests/test-scripts/test_pyflink_yarn.sh
##########
@@ -29,6 +29,7 @@ echo "pytest==4.4.1" > "${REQUIREMENTS_PATH}"
 # These tests are known to fail on JDK11. See FLINK-13719
 cd "${CURRENT_DIR}/../"
 source "${CURRENT_DIR}"/common_yarn_docker.sh
+

Review comment:
       avoid unnecessary changes

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions
+  * `Factory` definitions for catalogs, formats, connectors
+  * Other core APIs such as `Schema`
+  * Utilities to deal with type system, internal data types and printing
+  * When implementing a format, you usually need to depend only on this module
+
+### API
+
+* `flink-table-api-java`:
+  * Java APIs for Table API and SQL
+  * Package `org.apache.flink.table.delegation`, which serves as entrypoint for all planner capabilities
+* `flink-table-api-scala`: Scala APIs for Table API and SQL
+* `flink-table-api-bridge-base`: Base classes for APIs to bridge between Table API and DataStream API
+* `flink-table-api-java-bridge`:
+  * Java APIs to bridge between Table API and DataStream API
+  * When implementing a connector, you usually need to depend only on this module, in order to bridge your connector implementation developed with DataStream to Table API
+* `flink-table-api-scala-bridge`: Scala APIs to bridge between Table API and DataStream API
+* `flink-table-api-uber`: Uber JAR bundling `flink-table-common` and all the Java API modules, including 3rd party dependencies.
+
+### Runtime
+
+* `flink-table-code-splitter`: Tool to split generated Java code so that each method does not exceed the limit of 64KB.
+* `flink-table-runtime`:
+  * Operator implementations
+  * Built-in functions implementations
+  * Type system implementation, including readers/writers, converters and utilities
+  * Raw format
+  * The produced jar includes all the classes from this module and `flink-table-code-splitter`, including 3rd party dependencies
+
+### Parser and planner
+
+* `flink-sql-parser`: Default ANSI SQL parser implementation
+* `flink-sql-parser-hive`: Hive SQL dialect parser implementation
+* `flink-table-planner`:
+  * AST and Semantic tree
+  * SQL validator
+  * Planner and rules implementation

Review comment:
       Add `Code generator`

##########
File path: flink-end-to-end-tests/flink-stream-sql-test/pom.xml
##########
@@ -30,19 +30,12 @@
 
        <modelVersion>4.0.0</modelVersion>
 
-       <artifactId>flink-stream-sql-test_${scala.binary.version}</artifactId>
+       <artifactId>flink-stream-sql-test</artifactId>
        <name>Flink : E2E Tests : Stream SQL</name>
        <packaging>jar</packaging>
 
        <dependencies>
-               <dependency>
-                       <groupId>org.apache.flink</groupId>
-                       <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>

Review comment:
       We should declare all directly used dependencies. Therefore, we should declare `flink-streaming-java`.

##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data types definitions
+  * `Factory` definitions for catalogs, formats, connectors
+  * Other core APIs such as `Schema`
+  * Utilities to deal with type system, internal data types and printing
+  * When implementing a format, you usually need to depend only on this module
+
+### API
+
+* `flink-table-api-java`:
+  * Java APIs for Table API and SQL
+  * Package `org.apache.flink.table.delegation`, which serves as entrypoint for all planner capabilities
+* `flink-table-api-scala`: Scala APIs for Table API and SQL
+* `flink-table-api-bridge-base`: Base classes for APIs to bridge between Table API and DataStream API
+* `flink-table-api-java-bridge`:
+  * Java APIs to bridge between Table API and DataStream API
+  * When implementing a connector, you usually need to depend only on this module, in order to bridge your connector implementation developed with DataStream to Table API
+* `flink-table-api-scala-bridge`: Scala APIs to bridge between Table API and DataStream API
+* `flink-table-api-uber`: Uber JAR bundling `flink-table-common` and all the Java API modules, including 3rd party dependencies.
+
+### Runtime
+
+* `flink-table-code-splitter`: Tool to split generated Java code so that each method does not exceed the limit of 64KB.
+* `flink-table-runtime`:
+  * Operator implementations
+  * Built-in functions implementations
+  * Type system implementation, including readers/writers, converters and utilities
+  * Raw format
+  * The produced jar includes all the classes from this module and `flink-table-code-splitter`, including 3rd party dependencies
+
+### Parser and planner
+
+* `flink-sql-parser`: Default ANSI SQL parser implementation
+* `flink-sql-parser-hive`: Hive SQL dialect parser implementation
+* `flink-table-planner`:
+  * AST and Semantic tree
+  * SQL validator
+  * Planner and rules implementation
+  * Two jars are produced: one doesn't have any classifier and bundles all the classes from this module together with the two parsers, including 3rd party dependencies, while the other jar, classified as `loader-bundle`, extends the first jar including scala dependencies.
+* `flink-table-planner-loader`: Loader for `flink-table-planner` that loads the planner in a separate classpath, isolating the Scala version used to compile the planner.
+
+### SQL client
+
+* `flink-sql-client`: CLI tool to submit queries to a Flink cluster
+
+### Notes
+
+No module except `flink-table-planner` should depend on `flink-table-runtime` in production classpath,

Review comment:
       "depending" is ok, but only as `provided`. Maybe you can rephrase this paragraph.
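To illustrate the reviewer's point: a module that needs `flink-table-runtime` classes only for compilation would declare the dependency as `provided`, so the jar shipped in the distribution's `lib/` directory is used at runtime instead of a bundled copy. A hedged Maven sketch:

```xml
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-table-runtime</artifactId>
	<version>${project.version}</version>
	<!-- compile against it, but keep it off the production classpath -->
	<scope>provided</scope>
</dependency>
```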

##########
File path: flink-table/flink-table-planner-loader/pom.xml
##########
@@ -0,0 +1,165 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+                xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+       <modelVersion>4.0.0</modelVersion>
+
+       <parent>
+               <groupId>org.apache.flink</groupId>
+               <artifactId>flink-table</artifactId>
+               <version>1.15-SNAPSHOT</version>
+               <relativePath>..</relativePath>
+       </parent>
+
+       <artifactId>flink-table-planner-loader</artifactId>
+       <name>Flink : Table : Planner Loader</name>
+       <packaging>jar</packaging>
+       <description>This module contains the mechanism for loading flink-table-planner through a separate classloader.</description>
+
+       <dependencies>
+               <dependency>
+                       <!-- Component class loader -->
+                       <groupId>org.apache.flink</groupId>
+                       <artifactId>flink-core</artifactId>
+                       <version>${project.version}</version>
+               </dependency>
+               <dependency>
+                       <!-- org.apache.flink.table.delegation interfaces -->
+                       <groupId>org.apache.flink</groupId>
+                       <artifactId>flink-table-api-java</artifactId>
+                       <version>${project.version}</version>
+               </dependency>
+               <dependency>
+                       <!-- StreamExecutorFactory -->
+                       <groupId>org.apache.flink</groupId>
+                       <artifactId>flink-table-api-bridge-base</artifactId>
+                       <version>${project.version}</version>
+               </dependency>
+
+               <dependency>
+                       <!-- Test utils -->
+                       <groupId>org.apache.flink</groupId>
+                       <artifactId>flink-test-utils-junit</artifactId>
+               </dependency>
+
+               <dependency>
+                       <!-- Ensures that flink-table-planner is built beforehand, in order to bundle the jar -->
+                       <groupId>org.apache.flink</groupId>
+                       <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
+                       <version>${project.version}</version>
+                       <!-- We don't want any production code from flink-table-planner-loader to reference flink-table-planner directly -->
+                       <scope>runtime</scope>
+                       <!-- Prevent dependency from being accessible to modules depending on flink-table-planner -->
+                       <optional>true</optional>
+                       <exclusions>
+                               <!-- Prevent akka and scala from being visible to other modules depending on flink-table-planner-loader -->
+                               <exclusion>
+                                       <groupId>*</groupId>
+                                       <artifactId>*</artifactId>
+                               </exclusion>
+                       </exclusions>
+               </dependency>
+       </dependencies>
+
+       <build>
+               <plugins>
+                       <plugin>
+                               <groupId>org.apache.maven.plugins</groupId>
+                               <artifactId>maven-jar-plugin</artifactId>
+                               <executions>
+                                       <execution>
+                                               <goals>
+                                                       <goal>test-jar</goal>
+                                               </goals>
+                                       </execution>
+                               </executions>
+                       </plugin>
+                       <plugin>
+                               <groupId>org.apache.maven.plugins</groupId>
+                               <artifactId>maven-surefire-plugin</artifactId>
+                               <configuration>

Review comment:
       Can we also add comments to this section and the next? What is it doing? 
Helpful for non-Maven experts. Non-Maven experts would even like to have a 
comment per-line almost :D

##########
File path: flink-table/flink-sql-client/pom.xml
##########
@@ -53,40 +53,10 @@ under the License.
                        <version>${project.version}</version>
                </dependency>
 
-               <dependency>
-                       <groupId>org.apache.flink</groupId>
-                       <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>

Review comment:
       We depend on `StreamExecutionEnvironment`, so we still need to depend on `flink-streaming-java`.

##########
File path: flink-table/flink-table-planner-loader/pom.xml
##########
@@ -0,0 +1,165 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+                xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+       <modelVersion>4.0.0</modelVersion>
+
+       <parent>
+               <groupId>org.apache.flink</groupId>
+               <artifactId>flink-table</artifactId>
+               <version>1.15-SNAPSHOT</version>
+               <relativePath>..</relativePath>
+       </parent>
+
+       <artifactId>flink-table-planner-loader</artifactId>
+       <name>Flink : Table : Planner Loader</name>
+       <packaging>jar</packaging>
+       <description>This module contains the mechanism for loading flink-table-planner through a separate classloader.</description>

Review comment:
       Maybe add the reason why we are doing this here as well.
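For context, the reason (per the README hunk earlier in this thread) is that the planner is loaded through a separate classpath so that the Scala version it was compiled against stays hidden from user code. Flink's actual mechanism is a component classloader from `flink-core` (see the pom comment above); the core idea can be sketched with plain JDK classloaders, as an illustration only, not Flink's implementation:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // A loader with no URLs and no parent (bootstrap only): classes on the
        // application classpath, e.g. a bundled Scala library, are invisible
        // through it.
        try (URLClassLoader isolated = new URLClassLoader(new URL[0], null)) {
            boolean hidden;
            try {
                isolated.loadClass("IsolationDemo"); // an application class
                hidden = false;
            } catch (ClassNotFoundException e) {
                hidden = true;
            }
            System.out.println(hidden); // application classes do not leak
            // JDK classes still resolve through the bootstrap loader, much like
            // the loader module exposes only the shared API interfaces.
            System.out.println(
                    isolated.loadClass("java.lang.String") == String.class);
        }
    }
}
```

In the real module the arrangement is inverted: the planner jar is bundled inside `flink-table-planner-loader` and opened in a child classpath, while only the `org.apache.flink.table.delegation` interfaces are shared with the parent.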




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
