[jira] [Closed] (FLINK-34312) Improve the handling of default node types for named parameters.

2024-02-26 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34312.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Improve the handling of default node types for named parameters.
> 
>
> Key: FLINK-34312
> URL: https://issues.apache.org/jira/browse/FLINK-34312
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Currently, we support named parameters with optional arguments.
> By adapting Calcite's interfaces, we can fill in the DEFAULT operator when
> a parameter is missing. Both during the validation phase and when
> converting in the SqlToRel phase, we need to handle it specially by
> modifying the return type of the DEFAULT operator based on the argument
> type of the called operator.
> We have multiple places that need to handle the type of the DEFAULT
> operator, including SqlCallBinding, SqlOperatorBinding, and
> SqlToRelConverter.
> The improved solution is as follows:
> Before SqlToRel, we can construct a DEFAULT node with a return type that
> matches the argument type. This way, during the SqlToRel phase, there is
> no need for special handling of the DEFAULT node's type.
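The binding idea above can be sketched outside of Flink: reorder named arguments into declaration order and fill each missing optional parameter with a DEFAULT placeholder that already carries the declared type, so later phases need no special type handling. All class and method names below (NamedParamSketch, Param, Arg, bind) are hypothetical and for illustration only; this is not Flink's or Calcite's actual API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class NamedParamSketch {
    record Param(String name, String type, boolean optional) {}
    record Arg(String expr, String type) {}

    // Turn a name->argument map into a positional list, inserting a typed
    // DEFAULT placeholder for every missing optional parameter.
    static List<Arg> bind(List<Param> params, Map<String, Arg> named) {
        List<Arg> positional = new ArrayList<>();
        for (Param p : params) {
            Arg a = named.get(p.name());
            if (a == null) {
                if (!p.optional()) {
                    throw new IllegalArgumentException("Missing argument: " + p.name());
                }
                // The DEFAULT node is typed up front, mirroring the idea of
                // constructing it with the argument's type before SqlToRel.
                a = new Arg("DEFAULT", p.type());
            }
            positional.add(a);
        }
        return positional;
    }

    public static void main(String[] args) {
        List<Param> params = List.of(
                new Param("s", "STRING", false),
                new Param("n", "INT", true));
        Map<String, Arg> named = new LinkedHashMap<>();
        named.put("s", new Arg("'hello'", "STRING"));
        System.out.println(bind(params, named));
    }
}
```

Because the placeholder already knows its type, a downstream consumer only needs to check `expr.equals("DEFAULT")`, not recompute the type.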



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34312) Improve the handling of default node types for named parameters.

2024-02-26 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34312:
-

Assignee: Feng Jin

> Improve the handling of default node types for named parameters.
> 
>
> Key: FLINK-34312
> URL: https://issues.apache.org/jira/browse/FLINK-34312
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34054) FLIP-387: Support named parameters for functions and procedures

2024-02-26 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34054.
-
Resolution: Fixed

> FLIP-387: Support named parameters for functions and procedures
> ---
>
> Key: FLINK-34054
> URL: https://issues.apache.org/jira/browse/FLINK-34054
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
> Fix For: 1.19.0
>
>
> Umbrella issue for 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-387%3A+Support+named+parameters+for+functions+and+call+procedures



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34312) Improve the handling of default node types for named parameters.

2024-02-26 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820682#comment-17820682
 ] 

Shengkai Fang commented on FLINK-34312:
---

Merged into master: 1070c6e9e0f9f00991bdeb34f0757e4f0597931e

> Improve the handling of default node types for named parameters.
> 
>
> Key: FLINK-34312
> URL: https://issues.apache.org/jira/browse/FLINK-34312
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34265) Add the doc of named parameters

2024-02-26 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820543#comment-17820543
 ] 

Shengkai Fang edited comment on FLINK-34265 at 2/26/24 11:26 AM:
-

Merged into master: 26b1d1bbff590589c72af892fc22f80fa4ee1261
Merged into release-1.19: 0af2540dc30340742506b3c61850f1e2d25f4d72


was (Author: fsk119):
Merged into master: 26b1d1bbff590589c72af892fc22f80fa4ee1261

> Add the doc of named parameters
> ---
>
> Key: FLINK-34265
> URL: https://issues.apache.org/jira/browse/FLINK-34265
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34265) Add the doc of named parameters

2024-02-26 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34265.
-
Fix Version/s: 1.19.0
   1.20.0
   Resolution: Fixed

> Add the doc of named parameters
> ---
>
> Key: FLINK-34265
> URL: https://issues.apache.org/jira/browse/FLINK-34265
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34265) Add the doc of named parameters

2024-02-25 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820543#comment-17820543
 ] 

Shengkai Fang commented on FLINK-34265:
---

Merged into master: 26b1d1bbff590589c72af892fc22f80fa4ee1261

> Add the doc of named parameters
> ---
>
> Key: FLINK-34265
> URL: https://issues.apache.org/jira/browse/FLINK-34265
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33358) Flink SQL Client fails to start in Flink on YARN

2024-01-31 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33358:
-

Assignee: Prabhu Joseph

> Flink SQL Client fails to start in Flink on YARN
> 
>
> Key: FLINK-33358
> URL: https://issues.apache.org/jira/browse/FLINK-33358
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Table SQL / Client
>Affects Versions: 1.18.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: pull-request-available
>
> Flink SQL Client fails to start in Flink on YARN with the following error:
> {code:java}
> flink-yarn-session -tm 2048 -s 2 -d
> /usr/lib/flink/bin/sql-client.sh 
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Could not read from command line.
>   at 
> org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:221)
>   at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:179)
>   at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:121)
>   at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:114)
>   at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:169)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:118)
>   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:228)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:179)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.table.client.config.SqlClientOptions
>   at 
> org.apache.flink.table.client.cli.parser.SqlClientSyntaxHighlighter.highlight(SqlClientSyntaxHighlighter.java:59)
>   at 
> org.jline.reader.impl.LineReaderImpl.getHighlightedBuffer(LineReaderImpl.java:3633)
>   at 
> org.jline.reader.impl.LineReaderImpl.getDisplayedBufferWithPrompts(LineReaderImpl.java:3615)
>   at 
> org.jline.reader.impl.LineReaderImpl.redisplay(LineReaderImpl.java:3554)
>   at 
> org.jline.reader.impl.LineReaderImpl.doCleanup(LineReaderImpl.java:2340)
>   at 
> org.jline.reader.impl.LineReaderImpl.cleanup(LineReaderImpl.java:2332)
>   at 
> org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:626)
>   at 
> org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:194)
>   ... 7 more
> {code}
> The issue is due to the old jline jar from the Hadoop (3.3.3) classpath
> (/usr/lib/hadoop-yarn/lib/jline-3.9.0.jar) taking precedence, while Flink
> 1.18 requires jline-3.21.0.jar.
> Placing flink-sql-client.jar (bundled with jline-3.21) before the Hadoop
> classpath fixes the issue.
> {code:java}
> diff --git a/flink-table/flink-sql-client/bin/sql-client.sh 
> b/flink-table/flink-sql-client/bin/sql-client.sh
> index 24746c5dc8..4ab8635de2 100755
> --- a/flink-table/flink-sql-client/bin/sql-client.sh
> +++ b/flink-table/flink-sql-client/bin/sql-client.sh
> @@ -89,7 +89,7 @@ if [[ "$CC_CLASSPATH" =~ .*flink-sql-client.*.jar ]]; then
>  elif [ -n "$FLINK_SQL_CLIENT_JAR" ]; then
>  
>  # start client with jar
> -exec "$JAVA_RUN" $FLINK_ENV_JAVA_OPTS $JVM_ARGS "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$CC_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS:$FLINK_SQL_CLIENT_JAR"`" 
> org.apache.flink.table.client.SqlClient "$@" --jar "`manglePath 
> $FLINK_SQL_CLIENT_JAR`"
> +exec "$JAVA_RUN" $FLINK_ENV_JAVA_OPTS $JVM_ARGS "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$CC_CLASSPATH:$FLINK_SQL_CLIENT_JAR:$INTERNAL_HADOOP_CLASSPATHS`" 
> org.apache.flink.table.client.SqlClient "$@" --jar "`manglePath 
> $FLINK_SQL_CLIENT_JAR`"
>  
>  # write error message to stderr
>  else
> {code}
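The root cause, first-entry-wins resource lookup on the classpath, can be demonstrated without Flink or Hadoop. The sketch below (hypothetical class, directory, and file names) creates two directories that both contain a version.txt and shows that a URLClassLoader returns the copy from whichever directory comes first:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClasspathOrderDemo {
    // Create a temp directory holding a version.txt with the given content.
    static Path dirWith(String content) {
        try {
            Path d = Files.createTempDirectory("lib");
            Files.writeString(d.resolve("version.txt"), content);
            return d;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Resolve version.txt through a classloader whose classpath is
    // [first, second]; the first matching entry wins.
    static String firstVisible(Path first, Path second) {
        try (URLClassLoader cl = new URLClassLoader(
                new URL[] {first.toUri().toURL(), second.toUri().toURL()}, null)) {
            try (var in = cl.getResourceAsStream("version.txt")) {
                return new String(in.readAllBytes());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path hadoop = dirWith("jline-3.9.0");
        Path flink = dirWith("jline-3.21.0");
        // Hadoop entry first: the stale resource shadows the new one.
        System.out.println(firstVisible(hadoop, flink)); // jline-3.9.0
        // Flink entry first (the fix above): the bundled version is found.
        System.out.println(firstVisible(flink, hadoop)); // jline-3.21.0
    }
}
```

This is exactly why the patch moves $FLINK_SQL_CLIENT_JAR ahead of $INTERNAL_HADOOP_CLASSPATHS in the `-classpath` argument.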



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33358) Flink SQL Client fails to start in Flink on YARN

2024-01-31 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813031#comment-17813031
 ] 

Shengkai Fang commented on FLINK-33358:
---

Merged into master: fa364c7c668ec2a87bcbf18ce2b80f749cc16b2b

> Flink SQL Client fails to start in Flink on YARN
> 
>
> Key: FLINK-33358
> URL: https://issues.apache.org/jira/browse/FLINK-33358
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Table SQL / Client
>Affects Versions: 1.18.0
>Reporter: Prabhu Joseph
>Priority: Major
>  Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33358) Flink SQL Client fails to start in Flink on YARN

2024-01-31 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33358.
-
Resolution: Fixed

> Flink SQL Client fails to start in Flink on YARN
> 
>
> Key: FLINK-33358
> URL: https://issues.apache.org/jira/browse/FLINK-33358
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Table SQL / Client
>Affects Versions: 1.18.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34057) Support named parameters for functions

2024-01-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang resolved FLINK-34057.
---
Resolution: Resolved

> Support named parameters for functions
> --
>
> Key: FLINK-34057
> URL: https://issues.apache.org/jira/browse/FLINK-34057
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34056) Support named parameters for procedures

2024-01-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34056.
-
Resolution: Resolved

> Support named parameters for procedures
> ---
>
> Key: FLINK-34056
> URL: https://issues.apache.org/jira/browse/FLINK-34056
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34058) Support optional parameters for named parameters

2024-01-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34058:
-

Assignee: Feng Jin

> Support optional parameters for named parameters
> 
>
> Key: FLINK-34058
> URL: https://issues.apache.org/jira/browse/FLINK-34058
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34058) Support optional parameters for named parameters

2024-01-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34058.
-
Resolution: Implemented

> Support optional parameters for named parameters
> 
>
> Key: FLINK-34058
> URL: https://issues.apache.org/jira/browse/FLINK-34058
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34058) Support optional parameters for named parameters

2024-01-29 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811948#comment-17811948
 ] 

Shengkai Fang commented on FLINK-34058:
---

Merged into master: 4cd43fc09bd6c2e4806792fa2cce71f54ec1a9dd

> Support optional parameters for named parameters
> 
>
> Key: FLINK-34058
> URL: https://issues.apache.org/jira/browse/FLINK-34058
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33490) Validate the name conflicts when creating view

2024-01-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33490.
-
Resolution: Fixed

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid
> {code:sql}
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> {code}
> As the SQL standard states:
> If <regular view specification> is specified, then:
> i) If any two columns in the table specified by the <query expression>
> have equivalent <column name>s, or if any column of that table has an
> implementation-dependent name, then a <view column list> shall be
> specified.
> ii) Equivalent <column name>s shall not be specified more than once in
> the <view column list>.
> Many databases also throw an exception when view column names conflict,
> e.g. MySQL and PostgreSQL.
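A minimal sketch of the requested validation, assuming the planner can see the view's projected column names as a plain list; the class and method names here are illustrative, not Flink's actual code:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ViewNameCheck {
    // Reject a view whose projected column names collide, as the SQL
    // standard requires a column list with distinct names in that case.
    static void validateColumnNames(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException(
                        "Duplicate column name '" + name + "' in view; "
                                + "specify a column list with distinct names.");
            }
        }
    }

    public static void main(String[] args) {
        validateColumnNames(List.of("id", "uid")); // ok
        try {
            // Corresponds to: SELECT id, uid AS id FROM id_table
            validateColumnNames(List.of("id", "id"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```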



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34055) Introduce a new annotation for named parameters

2024-01-25 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34055.
-
Resolution: Implemented

> Introduce a new annotation for named parameters
> ---
>
> Key: FLINK-34055
> URL: https://issues.apache.org/jira/browse/FLINK-34055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Introduce a new annotation to specify the parameter name, indicate whether
> it is optional, and potentially support specifying default values in the
> future.
> Deprecate the argumentNames method in FunctionHint, as it is not
> user-friendly for specifying argument names together with optional
> configuration.
>  
> {code:java}
> public @interface ArgumentHint {
> /**
>  * The name of the parameter, default is an empty string.
>  */
> String name() default "";
>  
> /**
>  * Whether the parameter is optional, default is false.
>  */
> boolean isOptional() default false;
>  
> /**
>  * The data type hint for the parameter.
>  */
> DataTypeHint type() default @DataTypeHint();
> }
> {code}
> {code:java}
> public @interface FunctionHint {
>   
> /**
>  * Deprecated attribute for specifying the names of the arguments.
>  * It is no longer recommended to use this attribute.
>  */
> @Deprecated
> String[] argumentNames() default {""};
>   
> /**
>  * Attribute for specifying the hints and additional information for 
> function arguments.
>  */
> ArgumentHint[] arguments() default {};
> }
> {code}
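As a rough illustration of how such parameter annotations can be declared and read, here is a self-contained sketch using plain Java reflection. The ArgHint annotation and hintOf helper are invented for this example (simplified, without the DataTypeHint attribute) and are not the Flink classes proposed above:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ArgumentHintSketch {
    // Simplified stand-in for the proposed ArgumentHint annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    public @interface ArgHint {
        String name() default "";
        boolean isOptional() default false;
    }

    // A function whose parameters carry name/optionality metadata.
    public static String eval(@ArgHint(name = "s") String s,
                              @ArgHint(name = "n", isOptional = true) Integer n) {
        return s + ":" + n;
    }

    // Read the hint of parameter i as "name:isOptional".
    static String hintOf(int i) {
        try {
            Method m = ArgumentHintSketch.class.getMethod(
                    "eval", String.class, Integer.class);
            ArgHint h = m.getParameters()[i].getAnnotation(ArgHint.class);
            return h.name() + ":" + h.isOptional();
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(hintOf(0)); // s:false
        System.out.println(hintOf(1)); // n:true
    }
}
```

A framework reading such hints at runtime could use exactly this reflective lookup to map user-supplied argument names onto parameter positions.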



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34054) FLIP-387: Support named parameters for functions and procedures

2024-01-25 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34054:
-

Assignee: Feng Jin

> FLIP-387: Support named parameters for functions and procedures
> ---
>
> Key: FLINK-34054
> URL: https://issues.apache.org/jira/browse/FLINK-34054
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
> Fix For: 1.19.0
>
>
> Umbrella issue for 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-387%3A+Support+named+parameters+for+functions+and+call+procedures



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34055) Introduce a new annotation for named parameters

2024-01-25 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810863#comment-17810863
 ] 

Shengkai Fang commented on FLINK-34055:
---

Merged into master:

e1ebcaff52f423fdb54e3cb1bf8d5b3ccafc0a2f

9171194ef2647af1b55e58b98daeebabb6c84ad7

eb848dc0676d2c06adb3d16b3c6e51b378d4c57b

> Introduce a new annotation for named parameters
> ---
>
> Key: FLINK-34055
> URL: https://issues.apache.org/jira/browse/FLINK-34055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34055) Introduce a new annotation for named parameters

2024-01-25 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34055:
-

Assignee: Feng Jin

> Introduce a new annotation for named parameters
> ---
>
> Key: FLINK-34055
> URL: https://issues.apache.org/jira/browse/FLINK-34055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33760) Group Window agg has different result when only consuming -D records while using or not using minibatch

2024-01-23 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33760:
-

Assignee: Yunhong Zheng

> Group Window agg has different result when only consuming -D records while 
> using or not using minibatch
> ---
>
> Key: FLINK-33760
> URL: https://issues.apache.org/jira/browse/FLINK-33760
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>
> Add the following test to AggregateITCase to reproduce this bug.
>  
> {code:java}
> @Test
> def test(): Unit = {
>   val upsertSourceCurrencyData = List(
> changelogRow("-D", 1.bigDecimal, "a"),
> changelogRow("-D", 1.bigDecimal, "b"),
> changelogRow("-D", 1.bigDecimal, "b")
>   )
>   val upsertSourceDataId = registerData(upsertSourceCurrencyData);
>   tEnv.executeSql(s"""
>  |CREATE TABLE T (
>  | `a` DECIMAL(32, 8),
>  | `d` STRING,
>  | proctime as proctime()
>  |) WITH (
>  | 'connector' = 'values',
>  | 'data-id' = '$upsertSourceDataId',
>  | 'changelog-mode' = 'I,UA,UB,D',
>  | 'failing-source' = 'true'
>  |)
>  |""".stripMargin)
>   val sql =
> "SELECT max(a), sum(a), min(a), TUMBLE_START(proctime, INTERVAL '0.005' 
> SECOND), TUMBLE_END(proctime, INTERVAL '0.005' SECOND), d FROM T GROUP BY d, 
> TUMBLE(proctime, INTERVAL '0.005' SECOND)"
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink)
>   env.execute()
>   // Use the result precision/scale calculated for sum and don't override 
> with the one calculated
> // for plus()/minus(), which results in losing a decimal digit.
>   val expected = 
> List("6.41671935,65947.230719357070,609.0286740370369970")
>   assertEquals(expected, sink.getRetractResults.sorted)
> } {code}
> When MiniBatch is ON, the result is `List()`.
>  
> When MiniBatch is OFF, the result is 
> `List(null,-1.,null,2023-12-06T11:29:21.895,2023-12-06T11:29:21.900,a)`.
>  





[jira] [Updated] (FLINK-34155) Recurring SqlExecutionException

2024-01-22 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-34155:
--
Priority: Major  (was: Blocker)

> Recurring SqlExecutionException
> ---
>
> Key: FLINK-34155
> URL: https://issues.apache.org/jira/browse/FLINK-34155
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Jeyhun Karimov
>Priority: Major
>  Labels: test
> Attachments: disk-full.log
>
>
> When analyzing a very big Maven log file in our CI system, I found that 
> there is a recurring {{SqlException}} (a subset of the log file is 
> attached):
>  
> {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only 
> 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' 
> statement to submit Statement Set.}}
>  
>  
> which leads to:
>  
> {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.}}
>  





[jira] [Commented] (FLINK-34155) Recurring SqlExecutionException

2024-01-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809713#comment-17809713
 ] 

Shengkai Fang commented on FLINK-34155:
---

Hi [~jeyhunkarimov]. 

The exception mentioned in the text is due to the exception cases we added in 
our testing. The same exception occurs in multiple places:

1. While a SQL statement is being submitted, an illegal query is detected. The 
SQL gateway currently logs this exception;
2. When users directly fetch the results of the failing SQL, the logging 
system records the same exception again. 

I see that there is some redundancy in the exception information here, but I 
believe it does not affect the stability of the tests; at most it may 
complicate future troubleshooting. This test hasn't been modified for a long 
time, and if there were any issues, the CI tests would have become unstable a 
while ago.
 
 

> Recurring SqlExecutionException
> ---
>
> Key: FLINK-34155
> URL: https://issues.apache.org/jira/browse/FLINK-34155
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Jeyhun Karimov
>Priority: Blocker
>  Labels: test
> Attachments: disk-full.log
>
>
> When analyzing a very big Maven log file in our CI system, I found that 
> there is a recurring {{SqlException}} (a subset of the log file is 
> attached):
>  
> {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only 
> 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' 
> statement to submit Statement Set.}}
>  
>  
> which leads to:
>  
> {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.}}
>  





[jira] [Updated] (FLINK-34155) Recurring SqlExecutionException

2024-01-22 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-34155:
--
Affects Version/s: 1.19.0
   (was: 1.8.0)

> Recurring SqlExecutionException
> ---
>
> Key: FLINK-34155
> URL: https://issues.apache.org/jira/browse/FLINK-34155
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Jeyhun Karimov
>Priority: Blocker
>  Labels: test
> Attachments: disk-full.log
>
>
> When analyzing a very big Maven log file in our CI system, I found that 
> there is a recurring {{SqlException}} (a subset of the log file is 
> attached):
>  
> {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only 
> 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' 
> statement to submit Statement Set.}}
>  
>  
> which leads to:
>  
> {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.}}
>  





[jira] [Closed] (FLINK-34049) Refactor classes related to window TVF aggregation to prepare for non-aligned windows

2024-01-21 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-34049.
-
Resolution: Implemented

> Refactor classes related to window TVF aggregation to prepare for non-aligned 
> windows
> -
>
> Key: FLINK-34049
> URL: https://issues.apache.org/jira/browse/FLINK-34049
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> Refactor classes related to window TVF aggregation such as 
> AbstractWindowAggProcessor to prepare for the implementation of non-aligned 
> windows like session window





[jira] [Commented] (FLINK-34049) Refactor classes related to window TVF aggregation to prepare for non-aligned windows

2024-01-21 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809213#comment-17809213
 ] 

Shengkai Fang commented on FLINK-34049:
---

Merged into master: e345ffb453ac482d0250736b687cf88b85c606b0

> Refactor classes related to window TVF aggregation to prepare for non-aligned 
> windows
> -
>
> Key: FLINK-34049
> URL: https://issues.apache.org/jira/browse/FLINK-34049
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> Refactor classes related to window TVF aggregation such as 
> AbstractWindowAggProcessor to prepare for the implementation of non-aligned 
> windows like session window





[jira] [Closed] (FLINK-33928) Should not throw exception while creating view with specify field names even if the query conflicts in field names

2024-01-18 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33928.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Should not throw exception while creating view with specify field names even 
> if the query conflicts in field names
> --
>
> Key: FLINK-33928
> URL: https://issues.apache.org/jira/browse/FLINK-33928
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The following sql should be valid.
> {code:java}
> create view view1(a, b) as select t1.name, t2.name from t1 join t1 t2 on 
> t1.score = t2.score; {code}
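The rule the fix implements can be sketched in isolation: duplicate column names coming from the query only matter when the view does not rename them, which is why the statement above is valid while {{CREATE VIEW id_view AS SELECT id, uid AS id ...}} (FLINK-33490) is not. The helper names below are hypothetical, not Flink API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ViewColumnCheck {
    // Collect names that appear more than once in a column list.
    static List<String> findDuplicates(List<String> columns) {
        Set<String> seen = new HashSet<>();
        List<String> dups = new ArrayList<>();
        for (String name : columns) {
            if (!seen.add(name)) dups.add(name);
        }
        return dups;
    }

    // Explicit view column names rename the query output, so conflicts in
    // the query itself are harmless; without them, duplicates must fail.
    static void validateView(List<String> viewColumns, List<String> queryColumns) {
        List<String> effective = viewColumns.isEmpty() ? queryColumns : viewColumns;
        List<String> dups = findDuplicates(effective);
        if (!dups.isEmpty()) {
            throw new IllegalArgumentException("Duplicate view column names: " + dups);
        }
    }

    public static void main(String[] args) {
        // CREATE VIEW id_view AS SELECT id, uid AS id FROM id_table -> rejected
        try {
            validateView(List.of(), List.of("id", "id"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        // CREATE VIEW view1(a, b) AS SELECT t1.name, t2.name ... -> accepted
        validateView(List.of("a", "b"), List.of("name", "name"));
        System.out.println("accepted with explicit names");
    }
}
```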





[jira] [Commented] (FLINK-33928) Should not throw exception while creating view with specify field names even if the query conflicts in field names

2024-01-18 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808447#comment-17808447
 ] 

Shengkai Fang commented on FLINK-33928:
---

Merged into master: 82fcdfe5634fb82d3ab4a183818d852119dc68a9

> Should not throw exception while creating view with specify field names even 
> if the query conflicts in field names
> --
>
> Key: FLINK-33928
> URL: https://issues.apache.org/jira/browse/FLINK-33928
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
>
> The following sql should be valid.
> {code:java}
> create view view1(a, b) as select t1.name, t2.name from t1 join t1 t2 on 
> t1.score = t2.score; {code}





[jira] [Assigned] (FLINK-34140) Rename WindowContext and TriggerContext in window

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34140:
-

Assignee: xuyang

> Rename WindowContext and TriggerContext in window
> -
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Currently, WindowContext and TriggerContext not only contain a series of get 
> methods to obtain context information, but also include behaviors such as 
> clear.
> Maybe it's better to rename them to WindowDelegator and TriggerDelegator.





[jira] [Assigned] (FLINK-34139) The slice assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34139:
-

Assignee: xuyang

> The slice assigner should not reveal its event time or process time at the 
> interface level.
> ---
>
> Key: FLINK-34139
> URL: https://issues.apache.org/jira/browse/FLINK-34139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Currently, there is a function `boolean isEventTime()` that tells others 
> whether the assigner works on event time or processing time. However, as an 
> assigner, it should not expose this information.





[jira] [Assigned] (FLINK-34138) Improve the interface about MergeCallback in window

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34138:
-

Assignee: xuyang

> Improve the interface about MergeCallback in window
> ---
>
> Key: FLINK-34138
> URL: https://issues.apache.org/jira/browse/FLINK-34138
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> For a merge method, a return type of `void` is confusing.





[jira] [Assigned] (FLINK-34066) LagFunction throw NPE when input argument are not null

2024-01-16 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34066:
-

Assignee: Yunhong Zheng

> LagFunction throw NPE when input argument are not null
> --
>
> Key: FLINK-34066
> URL: https://issues.apache.org/jira/browse/FLINK-34066
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> This issue is related to https://issues.apache.org/jira/browse/FLINK-31967. 
> In FLINK-31967, the NPE error was not thoroughly fixed. If the select values 
> LAG(len, 1, cast(null as int)) and LAG(len, 1, 1) exist together in the test 
> case AggregateITCase.testLagAggFunction(), as in:
> {code:java}
> val sql =
>   s"""
>  |select
>  |  LAG(len, 1, cast(null as int)) OVER w AS nullable_prev_quantity,
>  |  LAG(len, 1, 1) OVER w AS prev_quantity,
>  |  LAG(len) OVER w AS prev_quantity
>  |from src
>  |WINDOW w AS (ORDER BY proctime)
>  |""".stripMargin {code}
> whereas before it was:
> {code:java}
> val sql =
>   s"""
>  |select
>  |  LAG(len, 1, cast(null as int)) OVER w AS prev_quantity,
>  |  LAG(len) OVER w AS prev_quantity
>  |from src
>  |WINDOW w AS (ORDER BY proctime)
>  |""".stripMargin {code}
> An NPE will be thrown.
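For context, the intended semantics of LAG with a (possibly null) default, independent of the NPE, can be sketched over an ordered list of rows. This is a minimal illustration, not Flink's implementation; the method name `lag` is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class LagSketch {
    // LAG(value, offset, default) over an ordered list: for each row, return
    // the value `offset` rows earlier, or the default if no such row exists.
    // The default may itself be null, as in LAG(len, 1, CAST(NULL AS INT)).
    static List<Integer> lag(List<Integer> rows, int offset, Integer dflt) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i++) {
            out.add(i - offset >= 0 ? rows.get(i - offset) : dflt);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(3, 5, 7);
        System.out.println(lag(rows, 1, null)); // prints "[null, 3, 5]"
        System.out.println(lag(rows, 1, 1));    // prints "[1, 3, 5]"
    }
}
```

The ticket's point is that both the null-default and constant-default variants must be handled in the same query without an NPE.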





[jira] [Commented] (FLINK-34066) LagFunction throw NPE when input argument are not null

2024-01-16 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807206#comment-17807206
 ] 

Shengkai Fang commented on FLINK-34066:
---

Merged into master: 488d60a1d3912246130b1fb7d238a0e7b336516f

> LagFunction throw NPE when input argument are not null
> --
>
> Key: FLINK-34066
> URL: https://issues.apache.org/jira/browse/FLINK-34066
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> This issue is related to https://issues.apache.org/jira/browse/FLINK-31967. 
> In FLINK-31967, the NPE error was not thoroughly fixed. If the select values 
> LAG(len, 1, cast(null as int)) and LAG(len, 1, 1) exist together in the test 
> case AggregateITCase.testLagAggFunction(), as in:
> {code:java}
> val sql =
>   s"""
>  |select
>  |  LAG(len, 1, cast(null as int)) OVER w AS nullable_prev_quantity,
>  |  LAG(len, 1, 1) OVER w AS prev_quantity,
>  |  LAG(len) OVER w AS prev_quantity
>  |from src
>  |WINDOW w AS (ORDER BY proctime)
>  |""".stripMargin {code}
> whereas before it was:
> {code:java}
> val sql =
>   s"""
>  |select
>  |  LAG(len, 1, cast(null as int)) OVER w AS prev_quantity,
>  |  LAG(len) OVER w AS prev_quantity
>  |from src
>  |WINDOW w AS (ORDER BY proctime)
>  |""".stripMargin {code}
> An NPE will be thrown.





[jira] [Assigned] (FLINK-33928) Should not throw exception while creating view with specify field names even if the query conflicts in field names

2024-01-12 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33928:
-

Assignee: Yunhong Zheng

> Should not throw exception while creating view with specify field names even 
> if the query conflicts in field names
> --
>
> Key: FLINK-33928
> URL: https://issues.apache.org/jira/browse/FLINK-33928
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>
> The following sql should be valid.
> {code:java}
> create view view1(a, b) as select t1.name, t2.name from t1 join t1 t2 on 
> t1.score = t2.score; {code}





[jira] [Comment Edited] (FLINK-33490) Validate the name conflicts when creating view

2024-01-03 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802490#comment-17802490
 ] 

Shengkai Fang edited comment on FLINK-33490 at 1/4/24 7:28 AM:
---

Hi, [~martijnvisser]. Thanks a lot for your patience and response.

To be honest, I must admit that this change will affect a small portion of 
users. As shown in the examples in the table above, if there are columns with 
the same name in a table, an error will be thrown, but we have deliberately 
refined the error message to show the location of the conflicting column names 
as clearly as possible. In fact, we scanned the impact of this fix on our 
internal use and found that less than 3% of jobs were affected. Moreover, this 
fix has been merged into the commercial branch for over a month now, and to 
date, we have not received any user feedback on this issue. Therefore, I 
believe the impact of this change is manageable.

I agree with you that FLINK-33740 is not needed; it would be better to modify 
this ticket to be about document improvements. cc [~xuyangzhong] 


was (Author: fsk119):
Hi, [~martijnvisser]. Thanks a lot for your patience and response.

 

To be honest, I must admit that this change will affect a small portion of 
users. As shown in the examples in the table above, if there are columns with 
the same name in a table, an error will be thrown, but we have deliberately 
refined the error message to show the location of the conflicting column names 
as clearly as possible. In fact, we scanned the impact of this fix on our 
internal use and found that less than 0.5% of jobs were affected. Moreover, 
this fix has been merged into the commercial branch for over a month now, and 
to date, we have not received any user feedback on this issue. Therefore, I 
believe the impact of this change is manageable.

 

I agree with you that FLINK-33740 is not needed; it would be better to modify 
this ticket to be about document improvements. cc [~xuyangzhong] 

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Commented] (FLINK-33490) Validate the name conflicts when creating view

2024-01-03 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802490#comment-17802490
 ] 

Shengkai Fang commented on FLINK-33490:
---

Hi, [~martijnvisser]. Thanks a lot for your patience and response.

 

To be honest, I must admit that this change will affect a small portion of 
users. As shown in the examples in the table above, if there are columns with 
the same name in a table, an error will be thrown, but we have deliberately 
refined the error message to show the location of the conflicting column names 
as clearly as possible. In fact, we scanned the impact of this fix on our 
internal use and found that less than 0.5% of jobs were affected. Moreover, 
this fix has been merged into the commercial branch for over a month now, and 
to date, we have not received any user feedback on this issue. Therefore, I 
believe the impact of this change is manageable.

 

I agree with you that FLINK-33740 is not needed; it would be better to modify 
this ticket to be about document improvements. cc [~xuyangzhong] 

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Reopened] (FLINK-33490) Validate the name conflicts when creating view

2024-01-03 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reopened FLINK-33490:
---

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Commented] (FLINK-33490) Validate the name conflicts when creating view

2024-01-03 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802136#comment-17802136
 ] 

Shengkai Fang commented on FLINK-33490:
---

Hi, [~martijnvisser]. I think this issue is just a bug fix rather than a new 
feature, and the scope of the fix is clear (in previous discussions, this fix 
is just about aligning with the SQL standard and the behavior of other 
databases). Introducing a FLIP to fix a bug would not only consume a lot of 
our time, but would also make the whole fixing process very lengthy. 

I think [~libenchao]'s proposal is just to sort out the current behavior 
supported by Flink SQL, to make Flink SQL easier for users to understand and 
more in line with the SQL standard. However, this does not affect our bug fix, 
because the purpose of this bug fix is to make Flink SQL more 
standard-compliant.

WDYT? cc [~lincoln.86xy] [~xuyangzhong] [~libenchao] 

 

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Commented] (FLINK-33446) SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation doesn't produce the correct plan

2024-01-03 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802129#comment-17802129
 ] 

Shengkai Fang commented on FLINK-33446:
---

?? {{Sql2RelConverter}} generates the correct plan??

Yes, I think you are right. I think it's a Calcite bug, and we need to upgrade 
the Calcite version.

> SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation
>  doesn't produce the correct plan
> ---
>
> Key: FLINK-33446
> URL: https://issues.apache.org/jira/browse/FLINK-33446
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Shengkai Fang
>Priority: Major
>
> Although this test doesn't throw an exception, the final plan produces 3 
> columns rather than 2 after optimization.
> {code:java}
> LogicalProject(inputs=[0..1], exprs=[[$4]])
> +- LogicalFilter(condition=[IS NULL($5)])
>+- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
>   :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
>   :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
> source: [TestTableSource(a, b, c)]]])
>   +- LogicalProject(inputs=[0..2], exprs=[[true]])
>  +- LogicalAggregate(group=[{0, 1, 2}])
> +- LogicalProject(inputs=[0..2])
>+- LogicalFilter(condition=[IS NULL($3)])
>   +- LogicalJoin(condition=[true], joinType=[left])
>  :- LogicalFilter(condition=[IS NOT NULL($0)])
>  :  +- LogicalProject(exprs=[[+($0, 1)]])
>  : +- LogicalTableScan(table=[[default_catalog, 
> default_database, r, source: [TestTableSource(d, e, f)]]])
>  +- LogicalProject(inputs=[0..1], exprs=[[true]])
> +- LogicalAggregate(group=[{0, 1}])
>+- LogicalProject(exprs=[[$3, $0]])
>   +- LogicalFilter(condition=[AND(=($1, $0), 
> =(CAST($2):BIGINT, $3))])
>  +- LogicalProject(exprs=[[+($0, 4), +($0, 
> 5), +($0, 6), CAST(+($0, 6)):BIGINT]])
> +- 
> LogicalTableScan(table=[[default_catalog, default_database, t, source: 
> [TestTableSource(i, j, k)]]])
> {code}
> After digging, I think the SubQueryRemoveRule doesn't generate a Correlate 
> node but a Join node instead, which causes the decorrelation to fail. For a 
> quick fix, I think we should throw an exception to notify users that this is 
> not a supported feature in Flink. 
> There are two possible ways to fix this issue:
> 1. Expand the subquery when converting SQL to rel. After experimenting with 
> Calcite, I found that the Sql2RelConverter generates the correct plan.
> {code:java}
> LogicalProject(inputs=[0..1])
> +- LogicalFilter(condition=[IS NULL($2)])
>+- LogicalCorrelate(correlation=[$cor7], joinType=[left], 
> requiredColumns=[{0, 1}])
>   :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
>   :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
> source: [TestTableSource(a, b, c)]]])
>   +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
>  +- LogicalProject(exprs=[[true]])
> +- LogicalFilter(condition=[AND(=($0, $cor7.d2), IS NULL($1))])
>+- LogicalCorrelate(correlation=[$cor4], joinType=[left], 
> requiredColumns=[{0}])
>   :- LogicalProject(inputs=[0])
>   :  +- LogicalTableScan(table=[[default_catalog, 
> default_database, r, source: [TestTableSource(d1, e, f)]]])
>   +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
>  +- LogicalProject(exprs=[[true]])
> +- LogicalFilter(condition=[AND(=($0, $cor4.d1), 
> =($1, $cor4.d1), =(CAST($2):BIGINT, $cor7.d3))])
>+- LogicalProject(exprs=[[+($0, 4), +($0, 5), 
> +($0, 6)]])
>   +- LogicalTableScan(table=[[default_catalog, 
> default_database, t, source: [TestTableSource(i, j, k)]]])
> {code}
> You can see that the new plan uses a Correlate node rather than a Join node.
> 2. CALCITE-5789 has fixed this problem by removing the nested correlation 
> node.





[jira] [Closed] (FLINK-33490) Validate the name conflicts when creating view

2023-12-20 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33490.
-
Resolution: Fixed

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Commented] (FLINK-33490) Validate the name conflicts when creating view

2023-12-20 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799240#comment-17799240
 ] 

Shengkai Fang commented on FLINK-33490:
---

Merged into master: a4fe01cb9d678b293107b0a6278fec1a749913cc

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standards states,
> If  is specified, then:
> i) If any two columns in the table specified by the  have 
> equivalent s, or if any column of that table has an 
> implementation-dependent name, then a  shall be specified.
> ii) Equivalent s shall not be specified more than once in the 
> .
> Many databases also throw exception when view name conflicts, e.g. mysql, 
> postgres.





[jira] [Closed] (FLINK-33672) Use MapState.entries() instead of keys() and get() in over window

2023-12-11 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33672.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Use MapState.entries() instead of keys() and get() in over window
> -
>
> Key: FLINK-33672
> URL: https://issues.apache.org/jira/browse/FLINK-33672
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> In code logic related with over windows, such as 
> org.apache.flink.table.runtime.operators.over.ProcTimeRangeBoundedPrecedingFunction
> {code:java}
> private transient MapState<Long, List<RowData>> inputState;
> public void onTimer(
> long timestamp,
> KeyedProcessFunction<K, RowData, RowData>.OnTimerContext ctx,
> Collector<RowData> out)
> throws Exception {
> //...
> Iterator<Long> iter = inputState.keys().iterator();
> //...
> while (iter.hasNext()) {
> Long elementKey = iter.next();
> if (elementKey < limit) {
> // element key outside of window. Retract values
> List<RowData> elementsRemove = inputState.get(elementKey);
> // ...
> }
> }
> //...
> } {code}
> As we can see, key iteration is combined with a get of the value for each 
> iterated key from inputState. However, for RocksDB the key iteration is 
> implemented on top of entry iteration, which means we could replace it with 
> entry iteration without introducing any extra overhead. As a result, we 
> could save a function call of get() by using getValue() on the iterated 
> entry at very low cost.
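The proposed replacement can be sketched with a plain java.util.Map standing in for the RocksDB-backed MapState (the method names retractViaKeys/retractViaEntries are hypothetical, chosen for the comparison):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class EntryIterationSketch {
    // Before: iterate keys, then look the value up again for every key.
    static List<String> retractViaKeys(Map<Long, List<String>> state, long limit) {
        List<String> removed = new ArrayList<>();
        for (Long elementKey : state.keySet()) {
            if (elementKey < limit) {
                removed.addAll(state.get(elementKey)); // extra lookup per key
            }
        }
        return removed;
    }

    // After: iterate entries once; the value arrives with the iterated
    // entry, so the per-key get() call disappears.
    static List<String> retractViaEntries(Map<Long, List<String>> state, long limit) {
        List<String> removed = new ArrayList<>();
        Iterator<Map.Entry<Long, List<String>>> iter = state.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<Long, List<String>> entry = iter.next();
            if (entry.getKey() < limit) {
                removed.addAll(entry.getValue());
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<Long, List<String>> state = new HashMap<>();
        state.put(1L, List.of("a"));
        state.put(5L, List.of("b", "c"));
        state.put(9L, List.of("d"));
        // Both variants retract the same elements below the window limit.
        System.out.println(retractViaKeys(state, 6L).equals(retractViaEntries(state, 6L)));
    }
}
```

For an in-memory map the two variants cost about the same; the ticket's point is that for RocksDB state the entry-based form removes a real per-key state access.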



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33672) Use MapState.entries() instead of keys() and get() in over window

2023-12-11 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17795569#comment-17795569
 ] 

Shengkai Fang commented on FLINK-33672:
---

Merged into master: 080119cca53d9890257982b6a74a7d6f913253c2

> Use MapState.entries() instead of keys() and get() in over window
> -
>
> Key: FLINK-33672
> URL: https://issues.apache.org/jira/browse/FLINK-33672
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> In code logic related with over windows, such as 
> org.apache.flink.table.runtime.operators.over.ProcTimeRangeBoundedPrecedingFunction
> {code:java}
> private transient MapState<Long, List<RowData>> inputState;
> public void onTimer(
>         long timestamp,
>         KeyedProcessFunction<K, RowData, RowData>.OnTimerContext ctx,
>         Collector<RowData> out)
>         throws Exception {
>     //...
>     Iterator<Long> iter = inputState.keys().iterator();
>     //...
>     while (iter.hasNext()) {
>         Long elementKey = iter.next();
>         if (elementKey < limit) {
>             // element key outside of window. Retract values
>             List<RowData> elementsRemove = inputState.get(elementKey);
>             // ...
>         }
>     }
>     //...
> } {code}
> As we can see, key iteration is combined with a get() of the value for each 
> iterated key from inputState. However, for RocksDB the key iteration is backed 
> by entry iteration, which means we could replace it with entry iteration 
> without introducing any extra overhead. As a result, we could save a 
> call to get() by using getValue() on the iterated entry at very low cost.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33691) Support agg push down for 'count(*)/count(1)/count(column not null)'

2023-12-06 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793592#comment-17793592
 ] 

Shengkai Fang commented on FLINK-33691:
---

Merged into master: 3060ccd49cc8d19634b431dbf0f09ac875d0d422

> Support agg push down for 'count(*)/count(1)/count(column not null)'
> 
>
> Key: FLINK-33691
> URL: https://issues.apache.org/jira/browse/FLINK-33691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Now, PushLocalAggIntoScanRule cannot push down 'count(*)/count(1)/count(column 
> not null)', but it can push down count(column nullable). The reason is that 
> count(*) and count(1) are optimized to a scan with a calc of '0 AS $f0' to 
> reduce read cost, which does not match the push-down rule.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33691) Support agg push down for 'count(*)/count(1)/count(column not null)'

2023-12-06 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33691.
-
Resolution: Implemented

> Support agg push down for 'count(*)/count(1)/count(column not null)'
> 
>
> Key: FLINK-33691
> URL: https://issues.apache.org/jira/browse/FLINK-33691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Now, PushLocalAggIntoScanRule cannot push down 'count(*)/count(1)/count(column 
> not null)', but it can push down count(column nullable). The reason is that 
> count(*) and count(1) are optimized to a scan with a calc of '0 AS $f0' to 
> reduce read cost, which does not match the push-down rule.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33691) Support agg push down for 'count(*)/count(1)/count(column not null)'

2023-11-29 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33691:
-

Assignee: Yunhong Zheng

> Support agg push down for 'count(*)/count(1)/count(column not null)'
> 
>
> Key: FLINK-33691
> URL: https://issues.apache.org/jira/browse/FLINK-33691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
> Fix For: 1.19.0
>
>
> Now, PushLocalAggIntoScanRule cannot push down 'count(*)/count(1)/count(column 
> not null)', but it can push down count(column nullable). The reason is that 
> count(*) and count(1) are optimized to a scan with a calc of '0 AS $f0' to 
> reduce read cost, which does not match the push-down rule.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-11-28 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790865#comment-17790865
 ] 

Shengkai Fang commented on FLINK-33644:
---

Hi [~dwysakowicz]. I am confused: how can we get the SQL from the 
ModifyOperation? Why does this FLIP only involve QueryOperation?

> FLIP-393: Make QueryOperations SQL serializable
> ---
>
> Key: FLINK-33644
> URL: https://issues.apache.org/jira/browse/FLINK-33644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> https://cwiki.apache.org/confluence/x/4guZE



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33490) Validate the name conflicts when creating view

2023-11-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788984#comment-17788984
 ] 

Shengkai Fang commented on FLINK-33490:
---

+1 to clarify the behaviour. 

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standard states,
> If <view column list> is specified, then:
> i) If any two columns in the table specified by the <query expression> have 
> equivalent <column name>s, or if any column of that table has an 
> implementation-dependent name, then a <view column list> shall be specified.
> ii) Equivalent <column name>s shall not be specified more than once in the 
> <view column list>.
> Many databases also throw an exception when view column names conflict, e.g. 
> MySQL, PostgreSQL.
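The requested validation amounts to a duplicate check on the view's projected column names. A minimal sketch follows; the class and method names here are hypothetical, not Flink's actual validator:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ViewColumnCheck {
    // Throws if the projected column names of a CREATE VIEW contain duplicates.
    static void validateViewColumns(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException(
                        "A column named '" + name + "' already exists in the view.");
            }
        }
    }

    public static void main(String[] args) {
        // CREATE VIEW v AS SELECT id, uid FROM id_table -- distinct names, accepted
        validateViewColumns(List.of("id", "uid"));

        // CREATE VIEW id_view AS SELECT id, uid AS id FROM id_table -- rejected
        try {
            validateViewColumns(List.of("id", "id"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```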



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33490) Validate the name conflicts when creating view

2023-11-15 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33490:
-

Assignee: xuyang

> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: xuyang
>Priority: Major
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standard states,
> If <view column list> is specified, then:
> i) If any two columns in the table specified by the <query expression> have 
> equivalent <column name>s, or if any column of that table has an 
> implementation-dependent name, then a <view column list> shall be specified.
> ii) Equivalent <column name>s shall not be specified more than once in the 
> <view column list>.
> Many databases also throw an exception when view column names conflict, e.g. 
> MySQL, PostgreSQL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33489) LISTAGG with generating partial-final agg will cause wrong result

2023-11-15 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17786323#comment-17786323
 ] 

Shengkai Fang commented on FLINK-33489:
---

Merged into master: 7e30e5e9fcd51382f48d48c9848bb8df14293e22

> LISTAGG with generating partial-final agg will cause wrong result
> -
>
> Key: FLINK-33489
> URL: https://issues.apache.org/jira/browse/FLINK-33489
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 
> 1.16.0, 1.17.0, 1.18.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> Adding the following test cases in SplitAggregateITCase will reproduce this 
> bug:
>  
> {code:java}
> // code placeholder
> @Test
> def testListAggWithDistinctMultiArgs(): Unit = {
>   val t1 = tEnv.sqlQuery(s"""
> |SELECT
> |  a,
> |  LISTAGG(DISTINCT c, '#')
> |FROM T
> |GROUP BY a
>  """.stripMargin)
>   val sink = new TestingRetractSink
>   t1.toRetractStream[Row].addSink(sink)
>   env.execute()
>   val expected = Map[String, List[String]](
> "1" -> List("Hello 0", "Hello 1"),
> "2" -> List("Hello 0", "Hello 1", "Hello 2", "Hello 3", "Hello 4"),
> "3" -> List("Hello 0", "Hello 1"),
> "4" -> List("Hello 1", "Hello 2", "Hello 3")
>   )
>   val actualData = sink.getRetractResults.sorted
>   println(actualData)
> } {code}
> The `actualData` is `List(1,Hello 0,Hello 1, 2,Hello 2,Hello 4,Hello 3,Hello 
> 1,Hello 0, 3,Hello 1,Hello 0, 4,Hello 2,Hello 3,Hello 1)`, and the delimiter 
> `#` will be ignored.
> Let's take its plan:
> {code:java}
> // code placeholder
> LegacySink(name=[DataStreamTableSink], fields=[a, EXPR$1])
> +- GroupAggregate(groupBy=[a], partialFinalType=[FINAL], select=[a, 
> LISTAGG_RETRACT($f3_0) AS $f1])
>    +- Exchange(distribution=[hash[a]])
>       +- GroupAggregate(groupBy=[a, $f3, $f4], partialFinalType=[PARTIAL], 
> select=[a, $f3, $f4, LISTAGG(DISTINCT c, $f2) AS $f3_0])
>          +- Exchange(distribution=[hash[a, $f3, $f4]])
>             +- Calc(select=[a, c, _UTF-16LE'#' AS $f2, MOD(HASH_CODE(c), 
> 1024) AS $f3, MOD(HASH_CODE(_UTF-16LE'#'), 1024) AS $f4])
>                +- MiniBatchAssigner(interval=[1000ms], mode=[ProcTime])
>                   +- DataStreamScan(table=[[default_catalog, 
> default_database, T]], fields=[a, b, c]) {code}
> The final `GroupAggregate` is missing the delimiter argument, so the default 
> delimiter `,` is used.
>  
>  
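The symptom can be reproduced with a toy two-phase LISTAGG (listAgg below is a simplified stand-in for illustration, not Flink's implementation): the PARTIAL aggregates receive the user's '#' delimiter, but the FINAL aggregate, missing the delimiter argument, falls back to ','.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ListAggDelimiter {
    // Simplified LISTAGG: join distinct values with the given delimiter.
    static String listAgg(List<String> values, String delimiter) {
        return values.stream().distinct().collect(Collectors.joining(delimiter));
    }

    public static void main(String[] args) {
        // PARTIAL phase: two hash buckets, each aggregated with the user's '#'.
        String partial1 = listAgg(List.of("Hello 0", "Hello 1"), "#");
        String partial2 = listAgg(List.of("Hello 2"), "#");

        // FINAL phase without the delimiter argument: the default ',' sneaks in.
        System.out.println(listAgg(List.of(partial1, partial2), ","));
        // prints: Hello 0#Hello 1,Hello 2

        // FINAL phase with the delimiter propagated, as the fix requires.
        System.out.println(listAgg(List.of(partial1, partial2), "#"));
        // prints: Hello 0#Hello 1#Hello 2
    }
}
```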



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33251) SQL Client query execution aborts after a few seconds: ConnectTimeoutException

2023-11-13 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17785738#comment-17785738
 ] 

Shengkai Fang commented on FLINK-33251:
---

I cannot reproduce this problem on my local machine with the steps above.
{code:java}
Darwin B-QB5MMD6M-0305.local 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 
22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64
{code}

Could you change the log level in FLINK_HOME/conf/log4j-cli.properties to 
TRACE and then upload the log file?


{code:java}
rootLogger.level = TRACE
{code}


> SQL Client query execution aborts after a few seconds: ConnectTimeoutException
> --
>
> Key: FLINK-33251
> URL: https://issues.apache.org/jira/browse/FLINK-33251
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.18.0, 1.17.1
> Environment: Macbook Pro 
> Apple M1 Max
>  
> {code:java}
> $ uname -a
> Darwin asgard08 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 
> 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000 arm64
> {code}
> {code:bash}
> $ java --version
> openjdk 11.0.20.1 2023-08-24
> OpenJDK Runtime Environment Homebrew (build 11.0.20.1+0)
> OpenJDK 64-Bit Server VM Homebrew (build 11.0.20.1+0, mixed mode)
> $ mvn --version
> Apache Maven 3.9.5 (57804ffe001d7215b5e7bcb531cf83df38f93546)
> Maven home: /opt/homebrew/Cellar/maven/3.9.5/libexec
> Java version: 11.0.20.1, vendor: Homebrew, runtime: 
> /opt/homebrew/Cellar/openjdk@11/11.0.20.1/libexec/openjdk.jdk/Contents/Home
> Default locale: en_GB, platform encoding: UTF-8
> OS name: "mac os x", version: "14.0", arch: "aarch64", family: "mac"
> {code}
>Reporter: Robin Moffatt
>Priority: Major
> Attachments: log.zip
>
>
> If I run a streaming query from an unbounded connector from the SQL Client, 
> it bombs out after ~15 seconds.
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:52596
> {code}
> This *doesn't* happen on 1.16.2. It *does* happen on *1.17.1* and *1.18* that 
> I have just built locally (git repo hash `9b837727b6d`). 
> The corresponding task's status in the Web UI shows as `CANCELED`. 
> ---
> h2. To reproduce
> Launch local cluster and SQL client
> {code}
> ➜  flink-1.18-SNAPSHOT ./bin/start-cluster.sh 
> Starting cluster.
> Starting standalonesession daemon on host asgard08.
> Starting taskexecutor daemon on host asgard08.
> ➜  flink-1.18-SNAPSHOT ./bin/sql-client.sh
> […]
> Flink SQL>
> {code}
> Set streaming mode and result mode
> {code:sql}
> Flink SQL> SET 'execution.runtime-mode' = 'STREAMING';
> [INFO] Execute statement succeed.
> Flink SQL> SET 'sql-client.execution.result-mode' = 'changelog';
> [INFO] Execute statement succeed.
> {code}
> Define a table to read data from CSV files in a folder
> {code:sql}
> CREATE TABLE firewall (
>   event_time STRING,
>   source_ip  STRING,
>   dest_ipSTRING,
>   source_prt INT,
>   dest_prt   INT
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = 'file:///tmp/firewall/',
>   'format' = 'csv',
>   'source.monitor-interval' = '1' -- unclear from the docs what the unit is 
> here
> );
> {code}
> Create a CSV file to read in
> {code:bash}
> $ mkdir /tmp/firewall
> $ cat > /tmp/firewall/data.csv <<EOF
> 2018-05-11 00:19:34,151.35.34.162,125.26.20.222,2014,68
> 2018-05-11 22:20:43,114.24.126.190,21.68.21.69,379,1619
> EOF
> {code}
> Run a streaming query 
> {code}
> SELECT * FROM firewall;
> {code}
> You will get results showing (and if you add another data file it will show 
> up too), but after ~30 seconds the query aborts and throws an error back to 
> the user at the SQL Client prompt:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:58470
> Flink SQL>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33491) Support json column validated

2023-11-09 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33491:
--
Fix Version/s: 1.19.0
   (was: 1.8.4)
   (was: 1.9.4)

> Support json column validated
> -
>
> Key: FLINK-33491
> URL: https://issues.apache.org/jira/browse/FLINK-33491
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: ouyangwulin
>Assignee: ouyangwulin
>Priority: Minor
> Fix For: 1.19.0
>
>
> Just like the {{is_valid_json}} function in PostgreSQL, it would be useful to 
> have an inbuilt function to check whether a string conforms to the JSON 
> specification.
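The intended semantics can be sketched as follows, assuming Jackson (com.fasterxml.jackson.databind) is on the classpath; `isValidJson` is a hypothetical name mirroring the proposed built-in, not an existing Flink function:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonCheck {
    private static final ObjectMapper MAPPER =
            new ObjectMapper()
                    // reject inputs like "{} junk" where only a prefix parses
                    .enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);

    // Rough analogue of the requested IS_VALID_JSON(str) built-in.
    public static boolean isValidJson(String s) {
        if (s == null) {
            return false;
        }
        try {
            // readTree throws on malformed input; empty input yields a missing node
            return !MAPPER.readTree(s).isMissingNode();
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidJson("{\"a\": [1, 2, 3]}")); // true
        System.out.println(isValidJson("{\"a\": 1,}"));        // false (trailing comma)
    }
}
```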



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33491) Support json column validated

2023-11-09 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33491:
-

Assignee: ouyangwulin

> Support json column validated
> -
>
> Key: FLINK-33491
> URL: https://issues.apache.org/jira/browse/FLINK-33491
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.8.4, 1.9.4
>Reporter: ouyangwulin
>Assignee: ouyangwulin
>Priority: Minor
> Fix For: 1.8.4, 1.9.4
>
>
> Just like the {{is_valid_json}} function in PostgreSQL, it would be useful to 
> have an inbuilt function to check whether a string conforms to the JSON 
> specification.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33491) Support json column validated

2023-11-09 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33491:
--
Affects Version/s: 1.19.0
   (was: 1.8.4)
   (was: 1.9.4)

> Support json column validated
> -
>
> Key: FLINK-33491
> URL: https://issues.apache.org/jira/browse/FLINK-33491
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: ouyangwulin
>Assignee: ouyangwulin
>Priority: Minor
> Fix For: 1.8.4, 1.9.4
>
>
> Just like the {{is_valid_json}} function in PostgreSQL, it would be useful to 
> have an inbuilt function to check whether a string conforms to the JSON 
> specification.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33490) Validate the name conflicts when creating view

2023-11-08 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33490:
--
Description: 
We should forbid 

```
CREATE VIEW id_view AS
SELECT id, uid AS id FROM id_table
```

As the SQL standard states,

If <view column list> is specified, then:
i) If any two columns in the table specified by the <query expression> have 
equivalent <column name>s, or if any column of that table has an 
implementation-dependent name, then a <view column list> shall be specified.
ii) Equivalent <column name>s shall not be specified more than once in the 
<view column list>.

Many databases also throw an exception when view column names conflict, e.g. 
MySQL, PostgreSQL.



  was:
When should forbid 

```
CREATE VIEW id_view AS
SELECT id, uid AS id FROM id_table
```

As the SQL standard states,

If <view column list> is specified, then:
i) If any two columns in the table specified by the <query expression> have 
equivalent <column name>s, or if any column of that table has an 
implementation-dependent name, then a <view column list> shall be specified.
ii) Equivalent <column name>s shall not be specified more than once in the 
<view column list>.

Many databases also throw an exception when view column names conflict, e.g. 
MySQL, PostgreSQL.




> Validate the name conflicts when creating view
> --
>
> Key: FLINK-33490
> URL: https://issues.apache.org/jira/browse/FLINK-33490
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Priority: Major
>
> We should forbid 
> ```
> CREATE VIEW id_view AS
> SELECT id, uid AS id FROM id_table
> ```
> As the SQL standard states,
> If <view column list> is specified, then:
> i) If any two columns in the table specified by the <query expression> have 
> equivalent <column name>s, or if any column of that table has an 
> implementation-dependent name, then a <view column list> shall be specified.
> ii) Equivalent <column name>s shall not be specified more than once in the 
> <view column list>.
> Many databases also throw an exception when view column names conflict, e.g. 
> MySQL, PostgreSQL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33490) Validate the name conflicts when creating view

2023-11-08 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-33490:
-

 Summary: Validate the name conflicts when creating view
 Key: FLINK-33490
 URL: https://issues.apache.org/jira/browse/FLINK-33490
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Shengkai Fang


When should forbid 

```
CREATE VIEW id_view AS
SELECT id, uid AS id FROM id_table
```

As the SQL standard states,

If <view column list> is specified, then:
i) If any two columns in the table specified by the <query expression> have 
equivalent <column name>s, or if any column of that table has an 
implementation-dependent name, then a <view column list> shall be specified.
ii) Equivalent <column name>s shall not be specified more than once in the 
<view column list>.

Many databases also throw an exception when view column names conflict, e.g. 
MySQL, PostgreSQL.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33489) LISTAGG with generating partial-final agg will cause wrong result

2023-11-08 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33489:
-

Assignee: xuyang

> LISTAGG with generating partial-final agg will cause wrong result
> -
>
> Key: FLINK-33489
> URL: https://issues.apache.org/jira/browse/FLINK-33489
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 
> 1.16.0, 1.17.0, 1.18.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Adding the following test cases in SplitAggregateITCase will reproduce this 
> bug:
>  
> {code:java}
> // code placeholder
> @Test
> def testListAggWithDistinctMultiArgs(): Unit = {
>   val t1 = tEnv.sqlQuery(s"""
> |SELECT
> |  a,
> |  LISTAGG(DISTINCT c, '#')
> |FROM T
> |GROUP BY a
>  """.stripMargin)
>   val sink = new TestingRetractSink
>   t1.toRetractStream[Row].addSink(sink)
>   env.execute()
>   val expected = Map[String, List[String]](
> "1" -> List("Hello 0", "Hello 1"),
> "2" -> List("Hello 0", "Hello 1", "Hello 2", "Hello 3", "Hello 4"),
> "3" -> List("Hello 0", "Hello 1"),
> "4" -> List("Hello 1", "Hello 2", "Hello 3")
>   )
>   val actualData = sink.getRetractResults.sorted
>   println(actualData)
> } {code}
> The `actualData` is `List(1,Hello 0,Hello 1, 2,Hello 2,Hello 4,Hello 3,Hello 
> 1,Hello 0, 3,Hello 1,Hello 0, 4,Hello 2,Hello 3,Hello 1)`, and the delimiter 
> `#` will be ignored.
> Let's take its plan:
> {code:java}
> // code placeholder
> LegacySink(name=[DataStreamTableSink], fields=[a, EXPR$1])
> +- GroupAggregate(groupBy=[a], partialFinalType=[FINAL], select=[a, 
> LISTAGG_RETRACT($f3_0) AS $f1])
>    +- Exchange(distribution=[hash[a]])
>       +- GroupAggregate(groupBy=[a, $f3, $f4], partialFinalType=[PARTIAL], 
> select=[a, $f3, $f4, LISTAGG(DISTINCT c, $f2) AS $f3_0])
>          +- Exchange(distribution=[hash[a, $f3, $f4]])
>             +- Calc(select=[a, c, _UTF-16LE'#' AS $f2, MOD(HASH_CODE(c), 
> 1024) AS $f3, MOD(HASH_CODE(_UTF-16LE'#'), 1024) AS $f4])
>                +- MiniBatchAssigner(interval=[1000ms], mode=[ProcTime])
>                   +- DataStreamScan(table=[[default_catalog, 
> default_database, T]], fields=[a, b, c]) {code}
> The final `GroupAggregate` is missing the delimiter argument, so the default 
> delimiter `,` is used.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33446) SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation doesn't produce the correct plan

2023-11-03 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33446:
--
Description: 
Although this test doesn't throw an exception, the final plan produces 3 
columns rather than 2 after optimization.

{code:java}
LogicalProject(inputs=[0..1], exprs=[[$4]])
+- LogicalFilter(condition=[IS NULL($5)])
   +- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalProject(inputs=[0..2], exprs=[[true]])
 +- LogicalAggregate(group=[{0, 1, 2}])
+- LogicalProject(inputs=[0..2])
   +- LogicalFilter(condition=[IS NULL($3)])
  +- LogicalJoin(condition=[true], joinType=[left])
 :- LogicalFilter(condition=[IS NOT NULL($0)])
 :  +- LogicalProject(exprs=[[+($0, 1)]])
 : +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d, e, f)]]])
 +- LogicalProject(inputs=[0..1], exprs=[[true]])
+- LogicalAggregate(group=[{0, 1}])
   +- LogicalProject(exprs=[[$3, $0]])
  +- LogicalFilter(condition=[AND(=($1, $0), 
=(CAST($2):BIGINT, $3))])
 +- LogicalProject(exprs=[[+($0, 4), +($0, 5), 
+($0, 6), CAST(+($0, 6)):BIGINT]])
+- 
LogicalTableScan(table=[[default_catalog, default_database, t, source: 
[TestTableSource(i, j, k)]]])

{code}

After digging, I think the problem is that SubQueryRemoveRule generates a Join 
node instead of a Correlate node, which causes the decorrelation to fail. For a 
quick fix, I think we should throw an exception to notify users that this is 
not a supported feature in Flink. 

There might be 2 ways to fix this issue:
1. Expand the subquery when converting SQL to rel. After experimenting with 
Calcite, I found that the SqlToRelConverter generates the correct plan.

{code:java}
LogicalProject(inputs=[0..1])
+- LogicalFilter(condition=[IS NULL($2)])
   +- LogicalCorrelate(correlation=[$cor7], joinType=[left], 
requiredColumns=[{0, 1}])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor7.d2), IS NULL($1))])
   +- LogicalCorrelate(correlation=[$cor4], joinType=[left], 
requiredColumns=[{0}])
  :- LogicalProject(inputs=[0])
  :  +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d1, e, f)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor4.d1), =($1, 
$cor4.d1), =(CAST($2):BIGINT, $cor7.d3))])
   +- LogicalProject(exprs=[[+($0, 4), +($0, 5), +($0, 
6)]])
  +- LogicalTableScan(table=[[default_catalog, 
default_database, t, source: [TestTableSource(i, j, k)]]])
{code}

Note that the new plan uses a Correlate node rather than a Join node.

2. CALCITE-5789 has fixed this problem by removing the nested correlation node.







  was:
Although this test doesn't throw an exception, the final plan produces 3 
columns rather than 2 after optimization.

{code:java}
LogicalProject(inputs=[0..1], exprs=[[$4]])
+- LogicalFilter(condition=[IS NULL($5)])
   +- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalProject(inputs=[0..2], exprs=[[true]])
 +- LogicalAggregate(group=[{0, 1, 2}])
+- LogicalProject(inputs=[0..2])
   +- LogicalFilter(condition=[IS NULL($3)])
  +- LogicalJoin(condition=[true], joinType=[left])
 :- LogicalFilter(condition=[IS NOT NULL($0)])
 :  +- LogicalProject(exprs=[[+($0, 1)]])
 : +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d, e, f)]]])
 +- LogicalProject(inputs=[0..1], exprs=[[true]])
+- LogicalAggregate(group=[{0, 1}])
   +- LogicalProject(exprs=[[$3, $0]])
  +- LogicalFilter(condition=[AND(=($1, $0), 
=(CAST($2):BIGINT, $3))])
 +- 

[jira] [Updated] (FLINK-33446) SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation doesn't produce the correct plan

2023-11-02 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33446:
--
Description: 
Although this test doesn't throw an exception, the final plan produces 3 
columns rather than 2 after optimization.

{code:java}
LogicalProject(inputs=[0..1], exprs=[[$4]])
+- LogicalFilter(condition=[IS NULL($5)])
   +- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalProject(inputs=[0..2], exprs=[[true]])
 +- LogicalAggregate(group=[{0, 1, 2}])
+- LogicalProject(inputs=[0..2])
   +- LogicalFilter(condition=[IS NULL($3)])
  +- LogicalJoin(condition=[true], joinType=[left])
 :- LogicalFilter(condition=[IS NOT NULL($0)])
 :  +- LogicalProject(exprs=[[+($0, 1)]])
 : +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d, e, f)]]])
 +- LogicalProject(inputs=[0..1], exprs=[[true]])
+- LogicalAggregate(group=[{0, 1}])
   +- LogicalProject(exprs=[[$3, $0]])
  +- LogicalFilter(condition=[AND(=($1, $0), 
=(CAST($2):BIGINT, $3))])
 +- LogicalProject(exprs=[[+($0, 4), +($0, 5), 
+($0, 6), CAST(+($0, 6)):BIGINT]])
+- 
LogicalTableScan(table=[[default_catalog, default_database, t, source: 
[TestTableSource(i, j, k)]]])

{code}

After digging, I think the problem is that SubQueryRemoveRule generates a Join 
node instead of a Correlate node, which causes the decorrelation to fail. For a 
quick fix, I think we should throw an exception to notify users that this is 
not a supported feature in Flink. 

There might be 2 ways to fix this issue:
1. Expand the subquery when converting SQL to rel. After experimenting with 
Calcite, I found that the SqlToRelConverter generates the correct plan.

{code:java}
LogicalProject(inputs=[0..1])
+- LogicalFilter(condition=[IS NULL($2)])
   +- LogicalCorrelate(correlation=[$cor7], joinType=[left], 
requiredColumns=[{0, 1}])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor7.d2), IS NULL($1))])
   +- LogicalCorrelate(correlation=[$cor4], joinType=[left], 
requiredColumns=[{0}])
  :- LogicalProject(inputs=[0])
  :  +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d1, e, f)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor4.d1), =($1, 
$cor4.d1), =(CAST($2):BIGINT, $cor7.d3))])
   +- LogicalProject(exprs=[[+($0, 4), +($0, 5), +($0, 
6)]])
  +- LogicalTableScan(table=[[default_catalog, 
default_database, t, source: [TestTableSource(i, j, k)]]])
{code}

Note that the new plan uses a Correlate node rather than a Join node.

2. CALCITE-4686 might fix this problem by removing the nested correlation node.







  was:
Although this test doesn't throw an exception, you can find that the final plan 
produces 3 columns rather than 2 after optimization.

{code:java}
LogicalProject(inputs=[0..1], exprs=[[$4]])
+- LogicalFilter(condition=[IS NULL($5)])
   +- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalProject(inputs=[0..2], exprs=[[true]])
 +- LogicalAggregate(group=[{0, 1, 2}])
+- LogicalProject(inputs=[0..2])
   +- LogicalFilter(condition=[IS NULL($3)])
  +- LogicalJoin(condition=[true], joinType=[left])
 :- LogicalFilter(condition=[IS NOT NULL($0)])
 :  +- LogicalProject(exprs=[[+($0, 1)]])
 : +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d, e, f)]]])
 +- LogicalProject(inputs=[0..1], exprs=[[true]])
+- LogicalAggregate(group=[{0, 1}])
   +- LogicalProject(exprs=[[$3, $0]])
  +- LogicalFilter(condition=[AND(=($1, $0), 
=(CAST($2):BIGINT, $3))])
 +- 

[jira] [Created] (FLINK-33446) SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation doesn't produce the correct plan

2023-11-02 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-33446:
-

 Summary: 
SubQueryAntiJoinTest#testMultiNotExistsWithCorrelatedOnWhere_NestedCorrelation 
doesn't produce the correct plan
 Key: FLINK-33446
 URL: https://issues.apache.org/jira/browse/FLINK-33446
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.2, 1.19.0, 1.18.1
Reporter: Shengkai Fang


Although this test doesn't throw an exception, the final plan produces 3 
columns rather than 2 after optimization.

{code:java}
LogicalProject(inputs=[0..1], exprs=[[$4]])
+- LogicalFilter(condition=[IS NULL($5)])
   +- LogicalJoin(condition=[AND(=($0, $2), =($1, $3))], joinType=[left])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalProject(inputs=[0..2], exprs=[[true]])
 +- LogicalAggregate(group=[{0, 1, 2}])
+- LogicalProject(inputs=[0..2])
   +- LogicalFilter(condition=[IS NULL($3)])
  +- LogicalJoin(condition=[true], joinType=[left])
 :- LogicalFilter(condition=[IS NOT NULL($0)])
 :  +- LogicalProject(exprs=[[+($0, 1)]])
 : +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d, e, f)]]])
 +- LogicalProject(inputs=[0..1], exprs=[[true]])
+- LogicalAggregate(group=[{0, 1}])
   +- LogicalProject(exprs=[[$3, $0]])
  +- LogicalFilter(condition=[AND(=($1, $0), 
=(CAST($2):BIGINT, $3))])
 +- LogicalProject(exprs=[[+($0, 4), +($0, 5), 
+($0, 6), CAST(+($0, 6)):BIGINT]])
+- 
LogicalTableScan(table=[[default_catalog, default_database, t, source: 
[TestTableSource(i, j, k)]]])

{code}

After digging, I think the SubQueryRemoveRule generates a Join node instead of 
a Correlate node, which causes the decorrelation to fail. As a quick fix, I 
think we should throw an exception to notify users that this is not a 
supported feature in Flink.

There are two possible ways to fix this issue:
1. Expand the subquery when converting SQL to rel. After experimenting with 
Calcite, I found that SqlToRelConverter generates the correct plan.

{code:java}
LogicalProject(inputs=[0..1])
+- LogicalFilter(condition=[IS NULL($2)])
   +- LogicalCorrelate(correlation=[$cor7], joinType=[left], 
requiredColumns=[{0, 1}])
  :- LogicalProject(exprs=[[+(2, $0), +(3, $1)]])
  :  +- LogicalTableScan(table=[[default_catalog, default_database, l, 
source: [TestTableSource(a, b, c)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor7.d2), IS NULL($1))])
   +- LogicalCorrelate(correlation=[$cor4], joinType=[left], 
requiredColumns=[{0}])
  :- LogicalProject(inputs=[0])
  :  +- LogicalTableScan(table=[[default_catalog, 
default_database, r, source: [TestTableSource(d1, e, f)]]])
  +- LogicalAggregate(group=[{}], agg#0=[MIN($0)])
 +- LogicalProject(exprs=[[true]])
+- LogicalFilter(condition=[AND(=($0, $cor4.d1), =($1, 
$cor4.d1), =(CAST($2):BIGINT, $cor7.d3))])
   +- LogicalProject(exprs=[[+($0, 4), +($0, 5), +($0, 
6)]])
  +- LogicalTableScan(table=[[default_catalog, 
default_database, t, source: [TestTableSource(i, j, k)]]])
{code}

Note that the new plan uses a Correlate node rather than a Join node.

2. CALCITE-4686 might fix this problem by removing the nested correlation node.
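Option 1 can be sketched as a converter-configuration fragment (this assumes 
Calcite's {{SqlToRelConverter.Config}} API; as far as I can tell, Flink 
currently sets {{withExpand(false)}} and relies on {{SubQueryRemoveRule}} 
instead):

{code:java}
// Configuration fragment only (not runnable on its own): let the converter
// expand subqueries itself instead of leaving RexSubQuery nodes for
// SubQueryRemoveRule to rewrite later.
SqlToRelConverter.Config config =
    SqlToRelConverter.config()
        .withExpand(true);
{code}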









--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33226) Forbid to drop current database

2023-10-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33226:
--
Fix Version/s: 1.19.0

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Neither PG nor MySQL supports dropping the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}





[jira] [Updated] (FLINK-33226) Forbid to drop current database

2023-10-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-33226:
--
Affects Version/s: 1.19.0

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Neither PG nor MySQL supports dropping the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}





[jira] [Closed] (FLINK-33226) Forbid to drop current database

2023-10-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33226.
-
Resolution: Fixed

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0
>Reporter: Shengkai Fang
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Neither PG nor MySQL supports dropping the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}





[jira] [Assigned] (FLINK-33226) Forbid to drop current database

2023-10-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33226:
-

Assignee: Ferenc Csaky

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shengkai Fang
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Neither PG nor MySQL supports dropping the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}





[jira] [Commented] (FLINK-33226) Forbid to drop current database

2023-10-17 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776167#comment-17776167
 ] 

Shengkai Fang commented on FLINK-33226:
---

Merged into master: a29a320187a45ca2579a4cdeddeb99a974d9cf2d

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
>
> Neither PG nor MySQL supports dropping the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}





[jira] [Assigned] (FLINK-22870) Grouping sets + case when + constant string throws AssertionError

2023-10-16 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-22870:
-

Assignee: Zheng yunhong

> Grouping sets + case when + constant string throws AssertionError
> -
>
> Key: FLINK-22870
> URL: https://issues.apache.org/jira/browse/FLINK-22870
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Caizhi Weng
>Assignee: Zheng yunhong
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Add the following case to 
> {{org.apache.flink.table.api.TableEnvironmentITCase}} to reproduce this issue.
> {code:scala}
> @Test
> def myTest2(): Unit = {
>   tEnv.executeSql(
> """
>   |create temporary table my_source(
>   |  a INT
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |create temporary view my_view as select a, 'test' as b from my_source
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |create temporary view my_view2 as select
>   |  a,
>   |  case when GROUPING(b) = 1 then 'test2' else b end as b
>   |from my_view
>   |group by grouping sets(
>   |  (),
>   |  (a),
>   |  (b),
>   |  (a, b)
>   |)
>   |""".stripMargin)
>   System.out.println(tEnv.explainSql(
> """
>   |select a, b from my_view2
>   |""".stripMargin))
> }
> {code}
> The exception stack is
> {code}
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER a, VARCHAR(5) CHARACTER SET "UTF-16LE" NOT NULL b) NOT NULL
> converted type:
> RecordType(INTEGER a, VARCHAR(5) CHARACTER SET "UTF-16LE" b) NOT NULL
> rel:
> LogicalProject(a=[$0], b=[CASE(=($2, 1), _UTF-16LE'test2':VARCHAR(5) 
> CHARACTER SET "UTF-16LE", CAST($1):VARCHAR(5) CHARACTER SET "UTF-16LE")])
>   LogicalAggregate(group=[{0, 1}], groups=[[{0, 1}, {0}, {1}, {}]], 
> agg#0=[GROUPING($1)])
> LogicalProject(a=[$0], b=[_UTF-16LE'test'])
>   LogicalTableScan(table=[[default_catalog, default_database, my_source]])
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:467)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:582)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:177)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:169)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1048)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertViewQuery(SqlToOperationConverter.java:897)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertCreateView(SqlToOperationConverter.java:864)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:259)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
>   at 
> org.apache.flink.table.api.TableEnvironmentITCase.myTest2(TableEnvironmentITCase.scala:148)
> {code}





[jira] [Created] (FLINK-33226) Forbid to drop current database

2023-10-09 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-33226:
-

 Summary: Forbid to drop current database
 Key: FLINK-33226
 URL: https://issues.apache.org/jira/browse/FLINK-33226
 Project: Flink
  Issue Type: Improvement
Reporter: Shengkai Fang


Neither PG nor MySQL supports dropping the current database. PG throws the 
following exception.


{code:java}
test=# drop database
test-# test;
ERROR:  cannot drop the currently open database
{code}
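A minimal standalone sketch of the guard being proposed (class, method, and 
message names here are illustrative assumptions, not Flink's actual catalog 
code):

{code:java}
// Illustrative sketch only: reject dropping the session's current database,
// mirroring the PG behavior above. Names are hypothetical, not Flink's API.
public class DropDatabaseGuard {
    static void dropDatabase(String currentDatabase, String toDrop) {
        if (currentDatabase.equals(toDrop)) {
            throw new IllegalArgumentException(
                    "Cannot drop the currently used database: " + toDrop);
        }
        // actual drop logic would go here
    }

    public static void main(String[] args) {
        dropDatabase("test", "other_db"); // allowed
        try {
            dropDatabase("test", "test"); // must fail
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}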






[jira] [Assigned] (FLINK-33000) SqlGatewayServiceITCase should utilize TestExecutorExtension instead of using a ThreadFactory

2023-09-27 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33000:
-

Assignee: Jiabao Sun

> SqlGatewayServiceITCase should utilize TestExecutorExtension instead of using 
> a ThreadFactory
> -
>
> Key: FLINK-33000
> URL: https://issues.apache.org/jira/browse/FLINK-33000
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway, Tests
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available, starter
>
> {{SqlGatewayServiceITCase}} uses an {{ExecutorThreadFactory}} for its 
> asynchronous operations. Instead, one should use {{TestExecutorExtension}} to 
> ensure proper cleanup of threads.
> We might also want to remove the {{AbstractTestBase}} parent class because 
> that uses JUnit4, whereas {{SqlGatewayServiceITCase}} is already based on 
> JUnit5.
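The cleanup concern can be shown without Flink's test utilities. This 
plain-JDK sketch approximates what {{TestExecutorExtension}} guarantees, 
namely that the executor is always shut down so no worker threads leak across 
tests (the extension's real API may differ):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-JDK approximation of the guarantee: the executor is always shut
// down after use, so no worker threads survive the test.
public class ExecutorCleanupSketch {
    static int runOnManagedExecutor() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> result = executor.submit(() -> 21 * 2);
            return result.get();
        } finally {
            executor.shutdownNow(); // the extension would do this in afterAll()
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnManagedExecutor());
    }
}
{code}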





[jira] [Commented] (FLINK-33000) SqlGatewayServiceITCase should utilize TestExecutorExtension instead of using a ThreadFactory

2023-09-27 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17769470#comment-17769470
 ] 

Shengkai Fang commented on FLINK-33000:
---

Merged into master:

de6c66ea805760f1550ae1fa348630edd9f17256
73717520cc63df8bd08fd008ac004659b210cfd1
de6c66ea805760f1550ae1fa348630edd9f17256

TODO:

Merged into release 1.18/1.17/1.16

> SqlGatewayServiceITCase should utilize TestExecutorExtension instead of using 
> a ThreadFactory
> -
>
> Key: FLINK-33000
> URL: https://issues.apache.org/jira/browse/FLINK-33000
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway, Tests
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, starter
>
> {{SqlGatewayServiceITCase}} uses an {{ExecutorThreadFactory}} for its 
> asynchronous operations. Instead, one should use {{TestExecutorExtension}} to 
> ensure proper cleanup of threads.
> We might also want to remove the {{AbstractTestBase}} parent class because 
> that uses JUnit4, whereas {{SqlGatewayServiceITCase}} is already based on 
> JUnit5.





[jira] [Comment Edited] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-26 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764498#comment-17764498
 ] 

Shengkai Fang edited comment on FLINK-33064 at 9/26/23 9:10 AM:


Merged into master: be509e6d67471d886e58d3ddea6ddd3627a191a8

Reverted because some bad cases.


was (Author: fsk119):
Merged into master: be509e6d67471d886e58d3ddea6ddd3627a191a8

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.
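A self-contained model of the capability split and the improved check. The 
interface names loosely mirror Flink's {{ScanTableSource}} and 
{{LookupTableSource}}, but this is a standalone sketch, not planner code:

{code:java}
// Standalone sketch: model the scan/lookup capability split and fail early
// with a descriptive message instead of the generic "Cannot generate a valid
// execution plan" error. Not Flink's actual code.
public class ScanSourceCheckSketch {
    interface DynamicTableSource { String name(); }
    interface ScanSource extends DynamicTableSource {}
    interface LookupSource extends DynamicTableSource {}

    static void validateForScan(DynamicTableSource source) {
        if (!(source instanceof ScanSource)) {
            throw new UnsupportedOperationException(
                    "Table '" + source.name()
                            + "' only supports lookup (point) reads; it"
                            + " cannot be used as a scan source.");
        }
    }

    public static void main(String[] args) {
        LookupSource dim = () -> "dim_table"; // lookup-only source
        try {
            validateForScan(dim);
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}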





[jira] [Comment Edited] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-26 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764498#comment-17764498
 ] 

Shengkai Fang edited comment on FLINK-33064 at 9/26/23 9:10 AM:


Merged into master: be509e6d67471d886e58d3ddea6ddd3627a191a8

Reverted because of some bad cases.


was (Author: fsk119):
Merged into master: be509e6d67471d886e58d3ddea6ddd3627a191a8

Reverted because some bad cases.

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.





[jira] [Reopened] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-26 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reopened FLINK-33064:
---

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.





[jira] [Assigned] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-12 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33064:
-

Assignee: Yunhong Zheng

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.





[jira] [Commented] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-12 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764498#comment-17764498
 ] 

Shengkai Fang commented on FLINK-33064:
---

Merged into master: be509e6d67471d886e58d3ddea6ddd3627a191a8

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.





[jira] [Closed] (FLINK-33064) Improve the error message when the lookup source is used as the scan source

2023-09-12 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33064.
-
Resolution: Fixed

> Improve the error message when the lookup source is used as the scan source
> ---
>
> Key: FLINK-33064
> URL: https://issues.apache.org/jira/browse/FLINK-33064
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Improve the error message when the lookup source is used as the scan source. 
> Currently, if we use a source that implements only LookupTableSource but not 
> ScanTableSource as a scan source, the planner cannot produce a proper plan 
> and reports 'Cannot generate a valid execution plan for the given query', 
> which can be improved.





[jira] [Closed] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-09-12 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-32731.
-
Fix Version/s: 1.18.0
   1.19.0
   Resolution: Fixed

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.18.0, 1.19.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> Aug 02 02:14:04   at 
> 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-09-12 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764050#comment-17764050
 ] 

Shengkai Fang commented on FLINK-32731:
---

Merged into release-1.18: 
3dcdc7f29384bc399e65ce46253975570e93481f
9e5659ea65278b2b699ab0c0f0eafc918a0107bc

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> 

[jira] [Closed] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-12 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-32988.
-
Resolution: Fixed

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-12 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763984#comment-17763984
 ] 

Shengkai Fang edited comment on FLINK-32988 at 9/12/23 7:22 AM:


Merged into master: 649b7fe197c8b03cce9595adcfea33c8d708a8b4
Merged into release-1.18: 9e5659ea65278b2b699ab0c0f0eafc918a0107bc


was (Author: fsk119):
Merged into master: 649b7fe197c8b03cce9595adcfea33c8d708a8b4

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-11 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-32988:
--
Affects Version/s: (was: 1.16.2)
   (was: 1.17.1)

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33063) udaf with user defined pojo object throw error while generate record equaliser

2023-09-11 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-33063.
-
Resolution: Fixed

> udaf with user defined pojo object throw error while generate record equaliser
> --
>
> Key: FLINK-33063
> URL: https://issues.apache.org/jira/browse/FLINK-33063
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> A UDAF with a user-defined POJO object throws an error while generating the 
> record equaliser: 
> When a user creates a UDAF whose record contains a user-defined complex POJO 
> object (like List or Map), the codegen 
> throws an error while generating the record equaliser; the error is:
> {code:java}
> A method named "compareTo" is not declared in any enclosing class nor any 
> subtype, nor through a static import.{code}
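The failure mode can be reproduced outside Flink's codegen with a minimal sketch (the class and field names below are hypothetical, not the actual generated equaliser): a field of type List or Map offers equals() but not compareTo(), so generated comparison code that assumes Comparable cannot compile and must fall back to equals().

```java
import java.util.List;
import java.util.Objects;

// Hypothetical UDAF accumulator POJO with a complex field; illustrative only.
class Acc {
    List<String> items;

    Acc(List<String> items) {
        this.items = items;
    }

    // An equaliser over such a record must compare fields with
    // Objects.equals(), because java.util.List does not implement
    // Comparable -- emitting `this.items.compareTo(other.items)` would fail
    // to compile with exactly the "A method named 'compareTo' is not
    // declared" error quoted above.
    boolean sameAs(Acc other) {
        return Objects.equals(this.items, other.items);
    }
}

public class EqualiserSketch {
    public static void main(String[] args) {
        Acc a = new Acc(List.of("x", "y"));
        Acc b = new Acc(List.of("x", "y"));
        System.out.println(a.sameAs(b)); // prints "true"
    }
}
```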



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33063) udaf with user defined pojo object throw error while generate record equaliser

2023-09-11 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763994#comment-17763994
 ] 

Shengkai Fang commented on FLINK-33063:
---

Merged into release-1.18: f2584a1df364a14ff50b5a52fe7cf5e38d4cdc9a


> udaf with user defined pojo object throw error while generate record equaliser
> --
>
> Key: FLINK-33063
> URL: https://issues.apache.org/jira/browse/FLINK-33063
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> A UDAF with a user-defined POJO object throws an error while generating the 
> record equaliser: 
> When a user creates a UDAF whose record contains a user-defined complex POJO 
> object (like List or Map), the codegen 
> throws an error while generating the record equaliser; the error is:
> {code:java}
> A method named "compareTo" is not declared in any enclosing class nor any 
> subtype, nor through a static import.{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32952) Scan reuse with readable metadata and watermark push down will get wrong watermark

2023-09-11 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-32952.
-
Resolution: Fixed

> Scan reuse with readable metadata and watermark push down will get wrong 
> watermark 
> ---
>
> Key: FLINK-32952
> URL: https://issues.apache.org/jira/browse/FLINK-32952
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> Scan reuse with readable metadata and watermark push-down produces a wrong 
> result. In the ScanReuser class, the watermark spec is rebuilt after projection 
> push-down; however, a wrong index is computed when looking up the watermark 
> field in the new source type.
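The wrong-index bug can be pictured with a small sketch (the field names are invented and this is not the actual ScanReuser code): after projection push-down, the watermark column must be located by its position in the projected row type, not by its index in the original source type.

```java
import java.util.Arrays;
import java.util.List;

public class WatermarkIndexSketch {
    public static void main(String[] args) {
        // Original source schema, and the columns kept after projection push-down.
        List<String> sourceFields = Arrays.asList("id", "name", "ts", "tag");
        List<String> projectedFields = Arrays.asList("ts", "tag");

        String watermarkField = "ts";

        // Index in the original type vs. index in the projected type.
        int originalIndex = sourceFields.indexOf(watermarkField);
        int projectedIndex = projectedFields.indexOf(watermarkField);

        // Rebuilding the watermark spec with originalIndex and resolving it
        // against the projected row type reads the wrong column; the lookup
        // must use projectedIndex instead.
        System.out.println(originalIndex + " vs " + projectedIndex); // prints "2 vs 0"
    }
}
```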



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32952) Scan reuse with readable metadata and watermark push down will get wrong watermark

2023-09-11 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763993#comment-17763993
 ] 

Shengkai Fang commented on FLINK-32952:
---

Merged into release-1.18: f66679bfe5f5b344eec71a7579504762cc3c04ae

> Scan reuse with readable metadata and watermark push down will get wrong 
> watermark 
> ---
>
> Key: FLINK-32952
> URL: https://issues.apache.org/jira/browse/FLINK-32952
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> Scan reuse with readable metadata and watermark push-down produces a wrong 
> result. In the ScanReuser class, the watermark spec is rebuilt after projection 
> push-down; however, a wrong index is computed when looking up the watermark 
> field in the new source type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-11 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763984#comment-17763984
 ] 

Shengkai Fang commented on FLINK-32988:
---

Merged into master: 649b7fe197c8b03cce9595adcfea33c8d708a8b4

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-09-07 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762682#comment-17762682
 ] 

Shengkai Fang commented on FLINK-32731:
---

The current fix works in the hive2 environment but not in the hive3 
environment. I have opened a PR to fix this.

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-30 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760705#comment-17760705
 ] 

Shengkai Fang commented on FLINK-32731:
---

> How do you find out whether "it works"?

I just observe whether the test fails again in the daily runs. However, I can 
no longer find the flink-ci.flink-master-mirror pipeline..

>  It affects 1.18 but the change you documented was only merged to master.

Sure. I will cherry-pick this to release-1.18


> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-27 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759404#comment-17759404
 ] 

Shengkai Fang commented on FLINK-32731:
---

Merged into master: 4b84b6cd5983ae8f058fae731eb0f4af6214b738

Waiting to check whether it works...

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> Aug 02 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757780#comment-17757780
 ] 

Shengkai Fang commented on FLINK-32731:
---

Thanks for sharing. I think we should add a retry mechanism to restart 
the container when the namenode fails. I will open a PR to fix this soon.
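
The retry idea described above could be sketched as a small generic wrapper that reattempts the container start and rethrows the last failure. This is only an illustrative sketch; the names (`RetryingStarter`, `startWithRetry`, `maxAttempts`) are hypothetical and not the actual Flink test utilities, which may instead use Testcontainers' built-in startup-attempt support:

```java
import java.util.concurrent.Callable;

public class RetryingStarter {
    /**
     * Invokes the given start action up to maxAttempts times, returning the
     * first successful result. If every attempt fails (e.g. the namenode
     * enters FATAL state), the last exception is rethrown.
     */
    public static <T> T startWithRetry(Callable<T> start, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return start.call();
            } catch (Exception e) {
                last = e; // remember the failure and retry
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        throw last;
    }
}
```

Under this design the flaky startup is retried transparently, and a persistent failure still surfaces the original exception for diagnosis.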

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> 

[jira] [Updated] (FLINK-25447) batch query cannot generate plan when a sorted view into multi sinks

2023-08-09 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-25447:
--
Fix Version/s: 1.17.2
   1.19.0

> batch query cannot generate plan when a sorted view into multi sinks
> 
>
> Key: FLINK-25447
> URL: https://issues.apache.org/jira/browse/FLINK-25447
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.2
>Reporter: lincoln lee
>Assignee: Zheng yunhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.2, 1.19.0
>
>
> A batch query that writes a sorted view into multiple sinks fails with a 
> cannot-plan exception
> {code}
>   @Test
>   def testSortedResultIntoMultiSinks(): Unit = {
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE Src (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING,
>  |  `e` STRING
>  |) WITH (
>  |  'connector' = 'values',
>  |  'bounded' = 'true'
>  |)
>""".stripMargin)
> val query = "SELECT * FROM Src order by c"
> val table = util.tableEnv.sqlQuery(query)
> util.tableEnv.registerTable("sortedTable", table)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink1 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>""".stripMargin)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink2 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>   """.stripMargin)
> val stmtSet= util.tableEnv.createStatementSet()
> stmtSet.addInsertSql(
>   "insert into sink1 select a, b, listagg(d) from sortedTable group by a, 
> b")
> stmtSet.addInsertSql(
>   "insert into sink2 select a, b, c, d from sortedTable")
> util.verifyExecPlan(stmtSet)
>   }
> {code}
> {code}
>   org.apache.flink.table.api.TableException: Cannot generate a valid 
> execution plan for the given query: 
>   LogicalSink(table=[default_catalog.default_database.sink2], fields=[a, b, 
> c, d])
>   +- LogicalProject(inputs=[0..3])
>  +- LogicalTableScan(table=[[IntermediateRelTable_0]])
>   This exception indicates that the query uses an unsupported SQL feature.
>   Please check the documentation for the set of currently supported SQL 
> features.
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:76)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
>   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:88)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:59)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>   at 

[jira] [Commented] (FLINK-25447) batch query cannot generate plan when a sorted view into multi sinks

2023-08-09 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17752573#comment-17752573
 ] 

Shengkai Fang commented on FLINK-25447:
---

Merged into master: 9231d62d25dd60bc82c0635e7554d1494e4fc82b

> batch query cannot generate plan when a sorted view into multi sinks
> 
>
> Key: FLINK-25447
> URL: https://issues.apache.org/jira/browse/FLINK-25447
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.2
>Reporter: lincoln lee
>Assignee: Zheng yunhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> A batch query that writes a sorted view into multiple sinks fails with a 
> cannot-plan exception
> {code}
>   @Test
>   def testSortedResultIntoMultiSinks(): Unit = {
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE Src (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING,
>  |  `e` STRING
>  |) WITH (
>  |  'connector' = 'values',
>  |  'bounded' = 'true'
>  |)
>""".stripMargin)
> val query = "SELECT * FROM Src order by c"
> val table = util.tableEnv.sqlQuery(query)
> util.tableEnv.registerTable("sortedTable", table)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink1 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>""".stripMargin)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink2 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>   """.stripMargin)
> val stmtSet= util.tableEnv.createStatementSet()
> stmtSet.addInsertSql(
>   "insert into sink1 select a, b, listagg(d) from sortedTable group by a, 
> b")
> stmtSet.addInsertSql(
>   "insert into sink2 select a, b, c, d from sortedTable")
> util.verifyExecPlan(stmtSet)
>   }
> {code}
> {code}
>   org.apache.flink.table.api.TableException: Cannot generate a valid 
> execution plan for the given query: 
>   LogicalSink(table=[default_catalog.default_database.sink2], fields=[a, b, 
> c, d])
>   +- LogicalProject(inputs=[0..3])
>  +- LogicalTableScan(table=[[IntermediateRelTable_0]])
>   This exception indicates that the query uses an unsupported SQL feature.
>   Please check the documentation for the set of currently supported SQL 
> features.
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:76)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
>   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:88)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:59)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> 

[jira] [Assigned] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-09 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-32731:
-

Assignee: Shengkai Fang

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> Aug 02 02:14:04   at 
> 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-09 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17752306#comment-17752306
 ] 

Shengkai Fang commented on FLINK-32731:
---

Merged into master: 5654eb798c744c924aff93d68ec3c4e413e75232 

Waiting for more details.

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> Aug 02 02:14:04   at 
> 

[jira] [Updated] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-07 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-32731:
--
Priority: Major  (was: Blocker)

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused)
> Aug 02 02:14:04   at 
> 

[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-07 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17751683#comment-17751683
 ] 

Shengkai Fang commented on FLINK-32731:
---

[~mapohl] hi. I think the test failed because it could not start the Hive 
container. The container logs contain the following message:

 
{code:java}
02:13:03,693 [docker-java-stream--148018928] INFO  
org.apache.flink.table.gateway.containers.HiveContainer  [] - STDOUT: 
2023-08-02 07:58:03,693 INFO gave up: hdfs-namenode entered FATAL state, too 
many start retries too quickly
{code}

Because it's just a test issue and the current information doesn't help us 
understand why the namenode fails to start up, I think we can reduce the 
priority. I will open a PR to back up the namenode and metastore logs when the 
test fails again.
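The log-backup idea mentioned above could be sketched with plain JDK file operations. Everything here is hypothetical and illustrative (the `LogBackup` class, the paths, the `*.log` filter); it is not the actual PR, just a minimal sketch of copying container logs into a CI artifact directory so they survive a failed run:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

// Hypothetical sketch: copy every *.log file from a container's log
// directory into an artifact directory. Names and paths are illustrative.
public class LogBackup {

    /** Copies every *.log file from logDir into artifactDir; returns the count. */
    public static int backupLogs(Path logDir, Path artifactDir) throws IOException {
        Files.createDirectories(artifactDir);
        int copied = 0;
        try (Stream<Path> entries = Files.list(logDir)) {
            for (Path log : (Iterable<Path>) entries::iterator) {
                if (log.getFileName().toString().endsWith(".log")) {
                    Files.copy(log, artifactDir.resolve(log.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    copied++;
                }
            }
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        Path logs = Files.createTempDirectory("container-logs");
        Files.writeString(logs.resolve("namenode.log"), "FATAL: too many start retries");
        Path artifacts = Files.createTempDirectory("build").resolve("artifacts");
        System.out.println("copied " + backupLogs(logs, artifacts) + " log file(s)");
    }
}
```

In a real test suite this copy step would run from a test-failure hook so the logs are only collected when something goes wrong.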



> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> 

[jira] [Closed] (FLINK-32616) FlinkStatement#executeQuery resource leaks when the input sql is not query

2023-07-20 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-32616.
-
Resolution: Fixed

> FlinkStatement#executeQuery resource leaks when the input sql is not query
> --
>
> Key: FLINK-32616
> URL: https://issues.apache.org/jira/browse/FLINK-32616
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Shengkai Fang
>Assignee: FangYong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> The current implementation just throws an exception if the input SQL is not a 
> query, and no one is responsible for closing the StatementResult.
>  
> BTW, the current implementation just submits the SQL to the gateway, which 
> means the SQL is executed. I wonder whether we need to expose this feature 
> to users?
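A minimal sketch of the leak described above, using hypothetical stand-ins (`StatementResult`, `executeQueryFixed`) rather than the real JDBC driver classes: the fix is simply to close the already-opened result before surfacing the error.

```java
import java.sql.SQLException;

// Hypothetical stand-ins following the issue's wording, not the real driver API.
public class ExecuteQuerySketch {

    static class StatementResult implements AutoCloseable {
        boolean closed = false;
        final boolean query;
        StatementResult(boolean query) { this.query = query; }
        boolean isQueryResult() { return query; }
        @Override public void close() { closed = true; }
    }

    // Leaky shape: throws without closing the already-opened result.
    static StatementResult executeQueryLeaky(StatementResult result) throws SQLException {
        if (!result.isQueryResult()) {
            throw new SQLException("Only query statements are allowed");
        }
        return result;
    }

    // Fixed shape: close the result before surfacing the error.
    static StatementResult executeQueryFixed(StatementResult result) throws SQLException {
        if (!result.isQueryResult()) {
            result.close();
            throw new SQLException("Only query statements are allowed");
        }
        return result;
    }

    public static void main(String[] args) {
        StatementResult nonQuery = new StatementResult(false);
        try {
            executeQueryFixed(nonQuery);
        } catch (SQLException expected) {
            System.out.println("closed=" + nonQuery.closed); // closed=true
        }
    }
}
```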



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32618) Remove the dependency of the flink-core in the flink-sql-jdbc-driver-bundle

2023-07-20 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-32618.
-
Fix Version/s: (was: 1.18.0)
   Resolution: Later

> Remove the dependency of the flink-core in the flink-sql-jdbc-driver-bundle
> ---
>
> Key: FLINK-32618
> URL: https://issues.apache.org/jira/browse/FLINK-32618
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32616) FlinkStatement#executeQuery resource leaks when the input sql is not query

2023-07-20 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745349#comment-17745349
 ] 

Shengkai Fang commented on FLINK-32616:
---

Merged into master: b9c5dc3d73161622908f90ac876f4ec002b1186e

> FlinkStatement#executeQuery resource leaks when the input sql is not query
> --
>
> Key: FLINK-32616
> URL: https://issues.apache.org/jira/browse/FLINK-32616
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> The current implementation just throws an exception if the input SQL is not a 
> query, and no one is responsible for closing the StatementResult.
>  
> BTW, the current implementation just submits the SQL to the gateway, which 
> means the SQL is executed. I wonder whether we need to expose this feature 
> to users?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32616) FlinkStatement#executeQuery resource leaks when the input sql is not query

2023-07-20 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-32616:
-

Assignee: FangYong

> FlinkStatement#executeQuery resource leaks when the input sql is not query
> --
>
> Key: FLINK-32616
> URL: https://issues.apache.org/jira/browse/FLINK-32616
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Shengkai Fang
>Assignee: FangYong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> The current implementation just throws an exception if the input SQL is not a 
> query, and no one is responsible for closing the StatementResult.
>  
> BTW, the current implementation just submits the SQL to the gateway, which 
> means the SQL is executed. I wonder whether we need to expose this feature 
> to users?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32617) FlinkResultSetMetaData throw exception for most methods

2023-07-20 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745348#comment-17745348
 ] 

Shengkai Fang commented on FLINK-32617:
---

Merged into master: a5a741876ab4392b922fc5858f9ad679fb7eabc7

> FlinkResultSetMetaData throw exception for most methods
> ---
>
> Key: FLINK-32617
> URL: https://issues.apache.org/jira/browse/FLINK-32617
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shengkai Fang
>Assignee: FangYong
>Priority: Major
>  Labels: pull-request-available
>
> I think for most methods, e.g.
> {code:java}
> boolean supportsMultipleResultSets() throws SQLException;
> boolean supportsMultipleTransactions() throws SQLException;
> boolean supportsMinimumSQLGrammar() throws SQLException;
> {code}
> we can just return true or false.
>  
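The suggestion above could look roughly like this. The class name is hypothetical, the method signatures follow `java.sql.DatabaseMetaData`, and which constant each method actually returns is a driver design decision (the values below are assumptions for illustration only):

```java
import java.sql.SQLException;

// Sketch: capability probes return fixed answers instead of throwing.
// Class name and the chosen return values are assumptions, not Flink's.
public class CapabilityDefaults {

    public boolean supportsMultipleResultSets() throws SQLException {
        return false; // assumed default for a streaming SQL gateway
    }

    public boolean supportsMultipleTransactions() throws SQLException {
        return false; // assumed default
    }

    public boolean supportsMinimumSQLGrammar() throws SQLException {
        return true; // assumed default
    }

    public static void main(String[] args) throws SQLException {
        CapabilityDefaults md = new CapabilityDefaults();
        System.out.println("multipleResultSets=" + md.supportsMultipleResultSets());
        System.out.println("minimumSQLGrammar=" + md.supportsMinimumSQLGrammar());
    }
}
```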



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

