[jira] [Updated] (FLINK-31695) Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory throws NoSuchMethodError

2023-04-03 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31695:

Summary: Calling bin/flink stop when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory throws 
NoSuchMethodError  (was: Calling bin/flink stop when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos 
NoSuchMethodError)

> Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in 
> lib directory throws NoSuchMethodError
> --
>
> Key: FLINK-31695
> URL: https://issues.apache.org/jira/browse/FLINK-31695
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Caizhi Weng
>Priority: Major
>
> To reproduce this bug, follow these steps:
> 1. Download 
> [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar|https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-2-uber/2.8.3-10.0]
>  and put it in the {{lib}} directory.
> 2. Run {{bin/flink stop}}
> The exception stack is
> {code}
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.CommandLine.hasOption(Lorg/apache/commons/cli/Option;)Z
>   at org.apache.flink.client.cli.StopOptions.<init>(StopOptions.java:53)
>   at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:539)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1102)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1165)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1165)
> {code}
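The {{NoSuchMethodError}} points to a commons-cli version clash: {{CommandLine.hasOption(org.apache.commons.cli.Option)}} was added in a later commons-cli release than the copy bundled inside the shaded Hadoop uber jar, and whichever jar comes first on the classpath wins. A quick way to see which jar a class is actually loaded from is the sketch below (a generic diagnostic, not part of Flink; the class name {{WhichJar}} is made up for illustration):

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the code-source URL of a class, or "(bootstrap)" for classes
    // loaded by the bootstrap loader (which have no CodeSource).
    static String locate(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // In a real debugging session you would pass
        // org.apache.commons.cli.CommandLine.class here; on an affected setup
        // it would resolve to the flink-shaded-hadoop uber jar.
        System.out.println(locate(WhichJar.class));
    }
}
```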



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31695) Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos NoSuchMethodError

2023-04-03 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31695:

Summary: Calling bin/flink stop when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos 
NoSuchMethodError  (was: Calling `bin/flink stop` when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos 
NoSuchMethodError)

> Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in 
> lib directory thwos NoSuchMethodError
> -
>
> Key: FLINK-31695
> URL: https://issues.apache.org/jira/browse/FLINK-31695
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Caizhi Weng
>Priority: Major
>
> To reproduce this bug, follow these steps:
> 1. Download 
> [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar|https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-2-uber/2.8.3-10.0]
>  and put it in the {{lib}} directory.
> 2. Run {{bin/flink stop}}
> The exception stack is
> {code}
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.CommandLine.hasOption(Lorg/apache/commons/cli/Option;)Z
>   at org.apache.flink.client.cli.StopOptions.<init>(StopOptions.java:53)
>   at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:539)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1102)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1165)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31695) Calling `bin/flink stop` when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos NoSuchMethodError

2023-04-03 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31695:
---

 Summary: Calling `bin/flink stop` when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos 
NoSuchMethodError
 Key: FLINK-31695
 URL: https://issues.apache.org/jira/browse/FLINK-31695
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.15.4, 1.16.1, 1.17.0
Reporter: Caizhi Weng


To reproduce this bug, follow these steps:

1. Download 
[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar|https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-2-uber/2.8.3-10.0]
 and put it in the {{lib}} directory.
2. Run {{bin/flink stop}}

The exception stack is

{code}
java.lang.NoSuchMethodError: 
org.apache.commons.cli.CommandLine.hasOption(Lorg/apache/commons/cli/Option;)Z
at org.apache.flink.client.cli.StopOptions.<init>(StopOptions.java:53)
at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:539)
at 
org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1102)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1165)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1165)
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31659) java.lang.ClassNotFoundException: org.apache.flink.table.planner.delegation.DialectFactory when bundled Hive connector jar is in classpath

2023-03-29 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31659:

Description: 
Steps to reproduce this bug:
1. Download a fresh 1.16.1 Flink distribution.
2. Add both {{flink-sql-connector-hive-2.3.9_2.12-1.16.0.jar}} and 
{{flink-shaded-hadoop-2-uber-2.8.3-10.0.jar}} to the {{lib}} directory.
3. Start a standalone cluster with {{bin/start-cluster.sh}}.
4. Start SQL client and run the following SQL.

{code:sql}
create table T (
  a int,
  b string
) with (
  'connector' = 'filesystem',
  'format' = 'csv',
  'path' = '/tmp/gao.csv'
);

create table S (
  a int,
  b string
) with (
  'connector' = 'print'
);

insert into S select * from T;
{code}

The following exception will occur.

{code}
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to parse 
statement: insert into S select * from T;
at 
org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:295)
 [flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:280)
 [flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:228)
 [flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
[flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
[flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
[flink-sql-client-1.16.1.jar:1.16.1]
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a 
source for reading table 'default_catalog.default_database.T'.

Table options are:

'connector'='filesystem'
'format'='json'
'path'='/tmp/gao.json'
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:166)
 ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:191)
 ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:175)
 ~[?:?]
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:115)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3619) 
~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2559)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2175)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2095)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2038)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:669)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:657)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3462)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
 ~[?:?]
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:215)
 ~[?:?]
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:191)
 ~[?:?]
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1498)
 ~[?:?]
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1253)
 ~[?:?]
at 

[jira] [Created] (FLINK-31659) java.lang.ClassNotFoundException: org.apache.flink.table.planner.delegation.DialectFactory when bundled Hive connector jar is in classpath

2023-03-29 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31659:
---

 Summary: java.lang.ClassNotFoundException: 
org.apache.flink.table.planner.delegation.DialectFactory when bundled Hive 
connector jar is in classpath
 Key: FLINK-31659
 URL: https://issues.apache.org/jira/browse/FLINK-31659
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.15.4, 1.16.1, 1.17.0
Reporter: Caizhi Weng


Steps to reproduce this bug:
1. Download a fresh 1.16.1 Flink distribution.
2. Add both {{flink-sql-connector-hive-2.3.9_2.12-1.16.0.jar}} and 
{{flink-shaded-hadoop-2-uber-2.8.3-10.0.jar}} to the {{lib}} directory.
3. Start a standalone cluster with {{bin/start-cluster.sh}}.
4. Start SQL client and run the following SQL.

{code:sql}
create table T (
  a int,
  b string
) with (
  'connector' = 'filesystem',
  'format' = 'csv',
  'path' = '/tmp/gao.csv'
);

create table S (
  a int,
  b string
) with (
  'connector' = 'print'
);

insert into S select * from T;
{code}

The following exception will occur.

{code}
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to parse 
statement: insert into S select * from T;
at 
org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71)
 ~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) 
~[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:295)
 [flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:280)
 [flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:228)
 [flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
[flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
[flink-sql-client-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
[flink-sql-client-1.16.1.jar:1.16.1]
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
[flink-sql-client-1.16.1.jar:1.16.1]
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a 
source for reading table 'default_catalog.default_database.T'.

Table options are:

'connector'='filesystem'
'format'='json'
'path'='/tmp/gao.json'
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:166)
 ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:191)
 ~[flink-table-api-java-uber-1.16.1.jar:1.16.1]
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:175)
 ~[?:?]
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:115)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3619) 
~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2559)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2175)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2095)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2038)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:669)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:657)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3462)
 ~[?:?]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
 ~[?:?]
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:215)
 ~[?:?]
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:191)
 ~[?:?]
at 

[jira] [Closed] (FLINK-31558) Support updating schema type with restrictions in CDC sink of Table Store

2023-03-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31558.
---
Resolution: Fixed

master: 4322595dfc1809199e3a815c7fa82ccb0750165a

> Support updating schema type with restrictions in CDC sink of Table Store
> -
>
> Key: FLINK-31558
> URL: https://issues.apache.org/jira/browse/FLINK-31558
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> It is common for database users to update column types to a wider type, such 
> as changing {{INT}} to {{BIGINT}} and {{CHAR}} to {{VARCHAR}}. We should 
> support these common use cases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31558) Support updating schema type with restrictions in CDC sink of Table Store

2023-03-22 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31558:
---

 Summary: Support updating schema type with restrictions in CDC 
sink of Table Store
 Key: FLINK-31558
 URL: https://issues.apache.org/jira/browse/FLINK-31558
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


It is common for database users to update column types to a wider type, such as 
changing {{INT}} to {{BIGINT}} and {{CHAR}} to {{VARCHAR}}. We should support 
these common use cases.
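The widening restriction can be sketched as a simple compatibility check. This is a minimal illustration under assumed rules ({{INT}} to {{BIGINT}}, {{CHAR}} to {{VARCHAR}}, {{FLOAT}} to {{DOUBLE}}); the method name and rule table are hypothetical, not the actual Table Store implementation:

```java
import java.util.Map;
import java.util.Set;

public class WideningCheck {
    // Illustrative widening rules: each source type may only be altered to
    // one of the listed wider target types (assumed rules for this sketch).
    private static final Map<String, Set<String>> WIDENINGS = Map.of(
        "INT", Set.of("BIGINT"),
        "CHAR", Set.of("VARCHAR"),
        "FLOAT", Set.of("DOUBLE")
    );

    // True only if altering `from` to `to` is a widening change.
    static boolean isWidening(String from, String to) {
        return WIDENINGS.getOrDefault(from, Set.of()).contains(to);
    }
}
```

A CDC sink would reject any {{ALTER}} that fails such a check instead of silently corrupting existing data files.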



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31434) Introduce CDC sink for Table Store

2023-03-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31434.
---
Resolution: Fixed

master: ecce2cc876c1bd9095abef4fdf24400bf179fefa

> Introduce CDC sink for Table Store
> --
>
> Key: FLINK-31434
> URL: https://issues.apache.org/jira/browse/FLINK-31434
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> To directly consume changes from Flink CDC connectors, we need a special CDC 
> sink for Flink Table Store.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31433) Make SchemaChange serializable

2023-03-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31433.
---
Resolution: Fixed

master: 3179e184da24a09fa67e3bab6bbbcd9a338634e5

> Make SchemaChange serializable
> --
>
> Key: FLINK-31433
> URL: https://issues.apache.org/jira/browse/FLINK-31433
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> To avoid concurrent changes to the table schema, CDC sinks for Flink Table 
> Store should send all {{SchemaChange}}s to a special process function. This 
> process function has a parallelism of 1 and is dedicated to schema changes.
>  
> To pass a {{SchemaChange}} through the network, {{SchemaChange}} must be 
> serializable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31432) Introduce a special StoreWriteOperator to deal with schema changes

2023-03-20 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31432.
---
Resolution: Fixed

master: 556443f098252a98610a03450ed8ad2a661f27dd

> Introduce a special StoreWriteOperator to deal with schema changes
> --
>
> Key: FLINK-31432
> URL: https://issues.apache.org/jira/browse/FLINK-31432
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> Currently {{StoreWriteOperator}} is not able to deal with schema changes. We 
> need to introduce a special {{StoreWriteOperator}} that can handle schema 
> changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31431) Support copying a FileStoreTable with latest schema

2023-03-14 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31431.
---
Resolution: Fixed

master: c155b9d7c0a6ef40327116cac88659c6be975919

> Support copying a FileStoreTable with latest schema
> ---
>
> Key: FLINK-31431
> URL: https://issues.apache.org/jira/browse/FLINK-31431
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> To capture schema changes, CDC sinks of Flink Table Store should be able to 
> use the latest schema at any time. This requires us to copy a 
> {{FileStoreTable}} with the latest schema so that we can create a 
> {{TableWrite}} with the latest schema.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31436) Remove schemaId from constructor of FileStoreCommitImpl and ManifestFile in Table Store

2023-03-14 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31436.
---
Resolution: Fixed

master: a651431f6c9e3e742e2b505920c43e2c18d76502

> Remove schemaId from constructor of FileStoreCommitImpl and ManifestFile in 
> Table Store
> ---
>
> Key: FLINK-31436
> URL: https://issues.apache.org/jira/browse/FLINK-31436
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> As the schema may change during a CTAS streaming job, the schema ID of 
> snapshots and manifest files may also change. We should remove {{schemaId}} 
> from their constructors and calculate the real {{schemaId}} on the fly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31436) Remove schemaId from constructor of FileStoreCommitImpl and ManifestFile in Table Store

2023-03-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31436:

Summary: Remove schemaId from constructor of FileStoreCommitImpl and 
ManifestFile in Table Store  (was: Remove schemaId from constructor of Snapshot 
and ManifestFile in Table Store)

> Remove schemaId from constructor of FileStoreCommitImpl and ManifestFile in 
> Table Store
> ---
>
> Key: FLINK-31436
> URL: https://issues.apache.org/jira/browse/FLINK-31436
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> As the schema may change during a CTAS streaming job, the schema ID of 
> snapshots and manifest files may also change. We should remove {{schemaId}} 
> from their constructors and calculate the real {{schemaId}} on the fly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31436) Remove schemaId from constructor of Snapshot and ManifestFile in Table Store

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31436:
---

 Summary: Remove schemaId from constructor of Snapshot and 
ManifestFile in Table Store
 Key: FLINK-31436
 URL: https://issues.apache.org/jira/browse/FLINK-31436
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


As the schema may change during a CTAS streaming job, the schema ID of snapshots 
and manifest files may also change. We should remove {{schemaId}} from their 
constructors and calculate the real {{schemaId}} on the fly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31435) Introduce event parser for MySql Debezium JSON format in Table Store

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31435:
---

 Summary: Introduce event parser for MySql Debezium JSON format in 
Table Store
 Key: FLINK-31435
 URL: https://issues.apache.org/jira/browse/FLINK-31435
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


MySQL is widely used among Flink CDC connector users. We should first support 
consuming changes from MySQL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31434) Introduce CDC sink for Table Store

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31434:
---

 Summary: Introduce CDC sink for Table Store
 Key: FLINK-31434
 URL: https://issues.apache.org/jira/browse/FLINK-31434
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


To directly consume changes from Flink CDC connectors, we need a special CDC 
sink for Flink Table Store.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31433) Make SchemaChange serializable

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31433:
---

 Summary: Make SchemaChange serializable
 Key: FLINK-31433
 URL: https://issues.apache.org/jira/browse/FLINK-31433
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


To avoid concurrent changes to the table schema, CDC sinks for Flink Table 
Store should send all {{SchemaChange}}s to a special process function. This 
process function has a parallelism of 1 and is dedicated to schema changes.

To pass a {{SchemaChange}} through the network, {{SchemaChange}} must be 
serializable.
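The serializability requirement can be verified with a plain Java serialization round trip. {{AddColumn}} below is a minimal hypothetical stand-in for a schema-change event, not the real Table Store class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTrip {
    // Hypothetical schema-change event; the real class carries more fields.
    static class AddColumn implements Serializable {
        final String name;
        final String type;
        AddColumn(String name, String type) { this.name = name; this.type = type; }
    }

    // Serialize to bytes and back, as the framework would when shipping the
    // event to the single-parallelism schema-change process function.
    static AddColumn roundTrip(AddColumn change) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(change);
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (AddColumn) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```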



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31432) Introduce a special StoreWriteOperator to deal with schema changes

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31432:
---

 Summary: Introduce a special StoreWriteOperator to deal with 
schema changes
 Key: FLINK-31432
 URL: https://issues.apache.org/jira/browse/FLINK-31432
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


Currently {{StoreWriteOperator}} is not able to deal with schema changes. We 
need to introduce a special {{StoreWriteOperator}} that can handle schema changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31431) Support copying a FileStoreTable with latest schema

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31431:
---

 Summary: Support copying a FileStoreTable with latest schema
 Key: FLINK-31431
 URL: https://issues.apache.org/jira/browse/FLINK-31431
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


To capture schema changes, CDC sinks of Flink Table Store should be able to use 
the latest schema at any time. This requires us to copy a {{FileStoreTable}} 
with the latest schema so that we can create a {{TableWrite}} with the latest 
schema.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31430) Support migrating states between different instances of TableWriteImpl and AbstractFileStoreWrite

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31430:
---

 Summary: Support migrating states between different instances of 
TableWriteImpl and AbstractFileStoreWrite
 Key: FLINK-31430
 URL: https://issues.apache.org/jira/browse/FLINK-31430
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


Currently {{Table}} and {{TableWrite}} in Flink Table Store have a fixed 
schema. However, to consume schema changes, Flink Table Store CDC sinks should 
be able to change their schema during a streaming job.

This requires us to pause and store the states of a {{TableWrite}}, then create 
a {{TableWrite}} with the newer schema and recover the states in the new 
{{TableWrite}}.
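The pause/store/recover handover can be sketched as follows. All names here ({{Writer}}, {{extractState}}, {{restoreState}}, {{migrate}}) are assumptions made for illustration, not the real {{TableWriteImpl}}/{{AbstractFileStoreWrite}} API:

```java
import java.util.ArrayList;
import java.util.List;

public class WriterHandover {
    // Hypothetical writer with a fixed schema and some unflushed in-memory state.
    static class Writer {
        final String schema;
        final List<String> pendingRecords = new ArrayList<>();
        Writer(String schema) { this.schema = schema; }

        // Pause: hand out a copy of the unflushed state for a successor.
        List<String> extractState() { return new ArrayList<>(pendingRecords); }

        // Recover: a freshly created writer adopts the predecessor's state.
        void restoreState(List<String> state) { pendingRecords.addAll(state); }
    }

    // Replace a writer with one built for the newer schema, migrating its state.
    static Writer migrate(Writer old, String newSchema) {
        Writer fresh = new Writer(newSchema);
        fresh.restoreState(old.extractState());
        return fresh;
    }
}
```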



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31429) Support CTAS(create table as) streaming job with schema changes in table store

2023-03-13 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-31429:
---

 Summary: Support CTAS(create table as) streaming job with schema 
changes in table store
 Key: FLINK-31429
 URL: https://issues.apache.org/jira/browse/FLINK-31429
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0


Currently CDC connectors for Flink have the ability to stream out record 
changes and schema changes of a database table. Flink Table Store should have 
the ability to directly consume these changes, including schema changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31269) Split hive connector to each module of each version

2023-03-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31269.
---
  Assignee: Shammon
Resolution: Fixed

master: dd2d600f6743ca074a023fdbb1a7a2cbcfbf8ff0

> Split hive connector to each module of each version
> ---
>
> Key: FLINK-31269
> URL: https://issues.apache.org/jira/browse/FLINK-31269
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31178) Public Writer API

2023-03-01 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31178.
---
Resolution: Fixed

master: 9b8dc4e2f3cf8bc28a95d5381c8544868bd5b688

> Public Writer API
> -
>
> Key: FLINK-31178
> URL: https://issues.apache.org/jira/browse/FLINK-31178
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31248) Improve documentation for append-only table

2023-03-01 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31248.
---
Resolution: Fixed

master: b8a700082a6032dfed7cee4273f3f76ce0483b5a

> Improve documentation for append-only table
> ---
>
> Key: FLINK-31248
> URL: https://issues.apache.org/jira/browse/FLINK-31248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>






[jira] [Comment Edited] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng edited comment on FLINK-31143 at 2/23/23 3:21 AM:
--

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

It is actually not easy to implement this. The biggest problem is that collect 
client is a stateful client and it stores some records not consumed by the user 
yet. If we want to support {{collect}} + savepoint, we will first need to find 
out where to store these unconsumed records. That is, we might need to 
introduce something like client state backend.

A less user-friendly but more feasible solution is that we require the user to 
consume all records in the {{collect}} iterator after a savepoint, before 
closing the iterator and starting a new job. If the user does not obey this 
requirement some records might be lost.

In any case, supporting {{collect}} + savepoint would be a new feature. As we 
are very close to releasing, I think it is proper to introduce it (if needed) in 
future versions.
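[Editor's note] The less user-friendly workaround described above (drain every record still buffered in the {{collect}} iterator after the savepoint, before closing it) can be sketched with a plain {{Iterator}}. This is an illustrative toy, not Flink API; {{DrainBeforeClose}} and {{drain}} are hypothetical names.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the proposed contract: after stop-with-savepoint, the user must
// consume everything still buffered in the collect iterator before closing it,
// otherwise those records are lost.
public class DrainBeforeClose {
    public static <T> List<T> drain(Iterator<T> it) {
        List<T> out = new ArrayList<>();
        while (it.hasNext()) {
            out.add(it.next()); // consume every record the client still buffers
        }
        return out;
    }

    public static void main(String[] args) {
        // hypothetical records left unconsumed in the client after the savepoint
        Iterator<Integer> buffered = List.of(3, 4, 5).iterator();
        System.out.println(drain(buffered)); // prints [3, 4, 5]
    }
}
```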


was (Author: tsreaper):
Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

It is actually not easy to implement this. The biggest problem is that collect 
client is a stateful client and it stores some records not consumed by the user 
yet. If we want to support {{collect}} + savepoint, we will first need to find 
out where to store these unconsumed records. That is, we might need to 
introduce something like client state backend.

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> 

[jira] [Comment Edited] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng edited comment on FLINK-31143 at 2/23/23 3:17 AM:
--

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

It is actually not easy to implement this. The biggest problem is that collect 
client is a stateful client and it stores some records not consumed by the user 
yet. If we want to support {{collect}} + savepoint, we will first need to find 
out where to store these unconsumed records. That is, we might need to 
introduce something like client state backend.


was (Author: tsreaper):
Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

It is actually not easy to implement this. The biggest problem is that collect 
client is a stateful client and it stores some records not consumed by the user 
yet. If we want to support {{collect}} + savepoint, we will first need to find 
out where to store these unconsumed records. That is, we might need to 
introduce something like client state backend.

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(restartConfig)
> .build();
> final TableEnvironment restartTableEnv = 
> TableEnvironment.create(restartSettings);
> restartTableEnv.executeSql(createTableQuery);
> restartTableEnv.sqlQuery(selectQuery).execute().print();
> }
> {code}
> h3. Expected behavior
> The job continues omitting the 

[jira] [Comment Edited] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng edited comment on FLINK-31143 at 2/23/23 3:17 AM:
--

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

It is actually not easy to implement this. The biggest problem is that collect 
client is a stateful client and it stores some records not consumed by the user 
yet. If we want to support {{collect}} + savepoint, we will first need to find 
out where to store these unconsumed records. That is, we might need to 
introduce something like client state backend.


was (Author: tsreaper):
Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(restartConfig)
> .build();
> final TableEnvironment restartTableEnv = 
> TableEnvironment.create(restartSettings);
> restartTableEnv.executeSql(createTableQuery);
> restartTableEnv.sqlQuery(selectQuery).execute().print();
> }
> {code}
> h3. Expected behavior
> The job continues omitting the initial two records and starts printing results 
> from 2 onwards.
> h3. Observed behavior
> No results are printed. The logs show that an invalid request was handled:
> {code:java}
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction [] - 
> Invalid request. Received version = b497a74f-e85c-404b-8df3-1b4b1a0c2de1, 
> offset = 0, while expected 

[jira] [Comment Edited] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng edited comment on FLINK-31143 at 2/23/23 3:14 AM:
--

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.


was (Author: tsreaper):
Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not 
supported. I can implement this but I think this is actually a new feature 
rather than a bug fix. As we're very close to releasing I think it might be 
more proper to introduce this feature in 1.17.1. What do you think?

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(restartConfig)
> .build();
> final TableEnvironment restartTableEnv = 
> TableEnvironment.create(restartSettings);
> restartTableEnv.executeSql(createTableQuery);
> restartTableEnv.sqlQuery(selectQuery).execute().print();
> }
> {code}
> h3. Expected behavior
> The job continues omitting the initial two records and starts printing results 
> from 2 onwards.
> h3. Observed behavior
> No results are printed. The logs show that an invalid request was handled:
> {code:java}
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction [] - 
> Invalid request. Received version = b497a74f-e85c-404b-8df3-1b4b1a0c2de1, 
> offset = 0, while expected version = b497a74f-e85c-404b-8df3-1b4b1a0c2de1, 
> offset = 1
> {code}
> It looks like the right offset is not picked up from the savepoint (see 
> 

[jira] [Comment Edited] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng edited comment on FLINK-31143 at 2/23/23 3:13 AM:
--

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not 
supported. I can implement this but I think this is actually a new feature 
rather than a bug fix. As we're very close to releasing I think it might be 
more proper to introduce this feature in 1.17.1. What do you think?


was (Author: tsreaper):
Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

However, it isn't hard to implement because the collect sink has already sent 
everything needed to the client; it's just that the client didn't handle this 
case. I can implement this, but I think this is actually a new feature rather 
than a bug fix. As we're very close to releasing, I think it might be more 
proper to introduce this feature in 1.17.1. What do you think?

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(restartConfig)
> .build();
> final TableEnvironment restartTableEnv = 
> TableEnvironment.create(restartSettings);
> restartTableEnv.executeSql(createTableQuery);
> restartTableEnv.sqlQuery(selectQuery).execute().print();
> }
> {code}
> h3. Expected behavior
> The job continues omitting the initial two records and starts printing results 
> from 2 onwards.
> h3. Observed behavior
> No results are printed. The logs show that an invalid request was 

[jira] [Commented] (FLINK-31143) Invalid request: offset doesn't match when restarting from a savepoint

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692463#comment-17692463
 ] 

Caizhi Weng commented on FLINK-31143:
-

Hi [~mapohl] [~Weijie Guo].

When implementing {{collect}} I only considered job restarts and didn't 
consider client restarts, so {{collect}} + savepoint is currently not supported.

However, it isn't hard to implement because the collect sink has already sent 
everything needed to the client; it's just that the client didn't handle this 
case. I can implement this, but I think this is actually a new feature rather 
than a bug fix. As we're very close to releasing, I think it might be more 
proper to introduce this feature in 1.17.1. What do you think?

> Invalid request: offset doesn't match when restarting from a savepoint
> --
>
> Key: FLINK-31143
> URL: https://issues.apache.org/jira/browse/FLINK-31143
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Critical
>
> I tried to run the following case:
> {code:java}
> public static void main(String[] args) throws Exception {
> final String createTableQuery =
> "CREATE TABLE left_table (a int, c varchar) "
> + "WITH ("
> + " 'connector' = 'datagen', "
> + " 'rows-per-second' = '1', "
> + " 'fields.a.kind' = 'sequence', "
> + " 'fields.a.start' = '0', "
> + " 'fields.a.end' = '10'"
> + ");";
> final String selectQuery = "SELECT * FROM left_table;";
> final Configuration initialConfig = new Configuration();
> initialConfig.set(CoreOptions.DEFAULT_PARALLELISM, 1);
> final EnvironmentSettings initialSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(initialConfig)
> .build();
> final TableEnvironment initialTableEnv = 
> TableEnvironment.create(initialSettings);
> // create job and consume two results
> initialTableEnv.executeSql(createTableQuery);
> final TableResult tableResult = 
> initialTableEnv.sqlQuery(selectQuery).execute();
> tableResult.await();
> // stop job with savepoint (consuming the first two results beforehand)
> final String savepointPath;
> try (CloseableIterator<Row> tableResultIterator = tableResult.collect()) {
> System.out.println(tableResultIterator.next());
> System.out.println(tableResultIterator.next());
> final JobClient jobClient =
> 
> tableResult.getJobClient().orElseThrow(IllegalStateException::new);
> final File savepointDirectory = Files.createTempDir();
> savepointPath =
> jobClient
> .stopWithSavepoint(
> true,
> savepointDirectory.getAbsolutePath(),
> SavepointFormatType.CANONICAL)
> .get();
> }
> // restart the very same job from the savepoint
> final SavepointRestoreSettings savepointRestoreSettings =
> SavepointRestoreSettings.forPath(savepointPath, true);
> final Configuration restartConfig = new Configuration(initialConfig);
> SavepointRestoreSettings.toConfiguration(savepointRestoreSettings, 
> restartConfig);
> final EnvironmentSettings restartSettings =
> EnvironmentSettings.newInstance()
> .inStreamingMode()
> .withConfiguration(restartConfig)
> .build();
> final TableEnvironment restartTableEnv = 
> TableEnvironment.create(restartSettings);
> restartTableEnv.executeSql(createTableQuery);
> restartTableEnv.sqlQuery(selectQuery).execute().print();
> }
> {code}
> h3. Expected behavior
> The job continues omitting the initial two records and starts printing results 
> from 2 onwards.
> h3. Observed behavior
> No results are printed. The logs show that an invalid request was handled:
> {code:java}
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction [] - 
> Invalid request. Received version = b497a74f-e85c-404b-8df3-1b4b1a0c2de1, 
> offset = 0, while expected version = b497a74f-e85c-404b-8df3-1b4b1a0c2de1, 
> offset = 1
> {code}
> It looks like the right offset is not picked up from the savepoint (see 
> 

[jira] [Updated] (FLINK-31120) ConcurrentModificationException occurred in StringFunctionsITCase.test

2023-02-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31120:

Release Note:   (was: master: fa6a2aed6136ae59ed14cd01819e8f94867840b7
release-1.17: 7201d0aa1c69ae0a52f07cea38a0545203f67c17
release-1.16: 4ba657ea60452f16d3f6175031f8471b3b7f042f)

> ConcurrentModificationException occurred in StringFunctionsITCase.test
> --
>
> Key: FLINK-31120
> URL: https://issues.apache.org/jira/browse/FLINK-31120
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Shuiqiang Chen
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46255&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=12334
> {code}
> Feb 17 04:51:25 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.725 s <<< FAILURE! - in 
> org.apache.flink.table.planner.functions.StringFunctionsITCase
> Feb 17 04:51:25 [ERROR] 
> org.apache.flink.table.planner.functions.StringFunctionsITCase.test(TestCase)[4]
>  Time elapsed: 4.367 s <<< ERROR!
> Feb 17 04:51:25 org.apache.flink.table.api.TableException: Failed to execute 
> sql
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:974)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$ResultTestItem.test(BuiltInFunctionTestBase.java:354)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestSetSpec.lambda$getTestCase$4(BuiltInFunctionTestBase.java:320)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestCase.execute(BuiltInFunctionTestBase.java:113)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase.test(BuiltInFunctionTestBase.java:93)
> Feb 17 04:51:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}





[jira] [Commented] (FLINK-31120) ConcurrentModificationException occurred in StringFunctionsITCase.test

2023-02-22 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692453#comment-17692453
 ] 

Caizhi Weng commented on FLINK-31120:
-

master: fa6a2aed6136ae59ed14cd01819e8f94867840b7
release-1.17: 7201d0aa1c69ae0a52f07cea38a0545203f67c17
release-1.16: 4ba657ea60452f16d3f6175031f8471b3b7f042f

> ConcurrentModificationException occurred in StringFunctionsITCase.test
> --
>
> Key: FLINK-31120
> URL: https://issues.apache.org/jira/browse/FLINK-31120
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Shuiqiang Chen
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46255&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=12334
> {code}
> Feb 17 04:51:25 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.725 s <<< FAILURE! - in 
> org.apache.flink.table.planner.functions.StringFunctionsITCase
> Feb 17 04:51:25 [ERROR] 
> org.apache.flink.table.planner.functions.StringFunctionsITCase.test(TestCase)[4]
>  Time elapsed: 4.367 s <<< ERROR!
> Feb 17 04:51:25 org.apache.flink.table.api.TableException: Failed to execute 
> sql
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:974)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$ResultTestItem.test(BuiltInFunctionTestBase.java:354)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestSetSpec.lambda$getTestCase$4(BuiltInFunctionTestBase.java:320)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestCase.execute(BuiltInFunctionTestBase.java:113)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase.test(BuiltInFunctionTestBase.java:93)
> Feb 17 04:51:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}





[jira] [Updated] (FLINK-31120) ConcurrentModificationException occurred in StringFunctionsITCase.test

2023-02-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31120:

Fix Version/s: 1.17.0
   1.16.2
   1.18.0

> ConcurrentModificationException occurred in StringFunctionsITCase.test
> --
>
> Key: FLINK-31120
> URL: https://issues.apache.org/jira/browse/FLINK-31120
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Shuiqiang Chen
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.17.0, 1.16.2, 1.18.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46255&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=12334
> {code}
> Feb 17 04:51:25 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.725 s <<< FAILURE! - in 
> org.apache.flink.table.planner.functions.StringFunctionsITCase
> Feb 17 04:51:25 [ERROR] 
> org.apache.flink.table.planner.functions.StringFunctionsITCase.test(TestCase)[4]
>  Time elapsed: 4.367 s <<< ERROR!
> Feb 17 04:51:25 org.apache.flink.table.api.TableException: Failed to execute 
> sql
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:974)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$ResultTestItem.test(BuiltInFunctionTestBase.java:354)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestSetSpec.lambda$getTestCase$4(BuiltInFunctionTestBase.java:320)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestCase.execute(BuiltInFunctionTestBase.java:113)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase.test(BuiltInFunctionTestBase.java:93)
> Feb 17 04:51:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31120) ConcurrentModificationException occurred in StringFunctionsITCase.test

2023-02-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-31120.
---
Release Note: 
master: fa6a2aed6136ae59ed14cd01819e8f94867840b7
release-1.17: 7201d0aa1c69ae0a52f07cea38a0545203f67c17
release-1.16: 4ba657ea60452f16d3f6175031f8471b3b7f042f
  Resolution: Fixed

> ConcurrentModificationException occurred in StringFunctionsITCase.test
> --
>
> Key: FLINK-31120
> URL: https://issues.apache.org/jira/browse/FLINK-31120
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Shuiqiang Chen
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46255=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=12334
> {code}
> Feb 17 04:51:25 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.725 s <<< FAILURE! - in 
> org.apache.flink.table.planner.functions.StringFunctionsITCase
> Feb 17 04:51:25 [ERROR] 
> org.apache.flink.table.planner.functions.StringFunctionsITCase.test(TestCase)[4]
>  Time elapsed: 4.367 s <<< ERROR!
> Feb 17 04:51:25 org.apache.flink.table.api.TableException: Failed to execute 
> sql
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:974)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$ResultTestItem.test(BuiltInFunctionTestBase.java:354)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestSetSpec.lambda$getTestCase$4(BuiltInFunctionTestBase.java:320)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestCase.execute(BuiltInFunctionTestBase.java:113)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase.test(BuiltInFunctionTestBase.java:93)
> Feb 17 04:51:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}





[jira] [Commented] (FLINK-31120) ConcurrentModificationException occurred in StringFunctionsITCase.test

2023-02-21 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17691900#comment-17691900
 ] 

Caizhi Weng commented on FLINK-31120:
-

I've left my review in the GitHub PR. My main concern is why we need a static 
variable. I see [~chesnay] is the author of the related code. Would you please 
take a look?
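For readers following the discussion: the exception in the title is the JDK's fail-fast guard. A collection that is structurally modified while being iterated (for example, a static list shared across concurrently running test cases) throws {{ConcurrentModificationException}}. A minimal single-threaded sketch of that mechanism (hypothetical illustration, not Flink's actual test code):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if mutating the list while iterating it triggers
    // the fail-fast ConcurrentModificationException.
    static boolean mutateWhileIterating() {
        List<String> shared = new ArrayList<>();
        shared.add("a");
        shared.add("b");
        try {
            for (String s : shared) {
                // structural modification during iteration bumps modCount,
                // which the iterator detects on its next call to next()
                shared.add(s + "!");
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mutateWhileIterating()); // prints "true"
    }
}
```

The same failure appears non-deterministically when the mutation comes from another thread, which is why sharing mutable static state between parallel tests is risky.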

> ConcurrentModificationException occurred in StringFunctionsITCase.test
> --
>
> Key: FLINK-31120
> URL: https://issues.apache.org/jira/browse/FLINK-31120
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Shuiqiang Chen
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46255=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=12334
> {code}
> Feb 17 04:51:25 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.725 s <<< FAILURE! - in 
> org.apache.flink.table.planner.functions.StringFunctionsITCase
> Feb 17 04:51:25 [ERROR] 
> org.apache.flink.table.planner.functions.StringFunctionsITCase.test(TestCase)[4]
>  Time elapsed: 4.367 s <<< ERROR!
> Feb 17 04:51:25 org.apache.flink.table.api.TableException: Failed to execute 
> sql
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:974)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> Feb 17 04:51:25 at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$ResultTestItem.test(BuiltInFunctionTestBase.java:354)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestSetSpec.lambda$getTestCase$4(BuiltInFunctionTestBase.java:320)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase$TestCase.execute(BuiltInFunctionTestBase.java:113)
> Feb 17 04:51:25 at 
> org.apache.flink.table.planner.functions.BuiltInFunctionTestBase.test(BuiltInFunctionTestBase.java:93)
> Feb 17 04:51:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}





[jira] [Updated] (FLINK-25813) TableITCase.testCollectWithClose failed on azure

2023-02-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-25813:

Fix Version/s: 1.17.0
   1.16.2

> TableITCase.testCollectWithClose failed on azure
> 
>
> Key: FLINK-25813
> URL: https://issues.apache.org/jira/browse/FLINK-25813
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.17.0, 1.16.2
>
>
> {code:java}
> 2022-01-25T08:35:25.3735884Z Jan 25 08:35:25 [ERROR] 
> TableITCase.testCollectWithClose  Time elapsed: 0.377 s  <<< FAILURE!
> 2022-01-25T08:35:25.3737127Z Jan 25 08:35:25 java.lang.AssertionError: Values 
> should be different. Actual: RUNNING
> 2022-01-25T08:35:25.3738167Z Jan 25 08:35:25  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-25T08:35:25.3739085Z Jan 25 08:35:25  at 
> org.junit.Assert.failEquals(Assert.java:187)
> 2022-01-25T08:35:25.3739922Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:163)
> 2022-01-25T08:35:25.3740846Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:177)
> 2022-01-25T08:35:25.3742302Z Jan 25 08:35:25  at 
> org.apache.flink.table.api.TableITCase.testCollectWithClose(TableITCase.scala:135)
> 2022-01-25T08:35:25.3743327Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-25T08:35:25.3744343Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-25T08:35:25.3745575Z Jan 25 08:35:25  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-25T08:35:25.3746840Z Jan 25 08:35:25  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-25T08:35:25.3747922Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-25T08:35:25.3749151Z Jan 25 08:35:25  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-25T08:35:25.3750422Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-25T08:35:25.3751820Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-25T08:35:25.3753196Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-25T08:35:25.3754253Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-01-25T08:35:25.3755441Z Jan 25 08:35:25  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
> 2022-01-25T08:35:25.3756656Z Jan 25 08:35:25  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-25T08:35:25.3757778Z Jan 25 08:35:25  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-25T08:35:25.3758821Z Jan 25 08:35:25  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-25T08:35:25.3759840Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-25T08:35:25.3760919Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-25T08:35:25.3762249Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-25T08:35:25.3763322Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-25T08:35:25.3764436Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-25T08:35:25.3765907Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-25T08:35:25.3766957Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-25T08:35:25.3768104Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-25T08:35:25.3769128Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-25T08:35:25.3770125Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-01-25T08:35:25.3771118Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-01-25T08:35:25.3772264Z Jan 25 08:35:25  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2022-01-25T08:35:25.3773118Z Jan 25 08:35:25  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 

[jira] [Closed] (FLINK-25813) TableITCase.testCollectWithClose failed on azure

2023-02-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-25813.
---
Resolution: Fixed

master: 3a647d6bec7d0413bcbc71668ed45be57bb4bfe6
release-1.17: bb66b989188c661d9d7847b6a5728697a3a2dcd2
release-1.16: c0d52c09340cb865c01c1590dd2803dab48e097a

> TableITCase.testCollectWithClose failed on azure
> 
>
> Key: FLINK-25813
> URL: https://issues.apache.org/jira/browse/FLINK-25813
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> 2022-01-25T08:35:25.3735884Z Jan 25 08:35:25 [ERROR] 
> TableITCase.testCollectWithClose  Time elapsed: 0.377 s  <<< FAILURE!
> 2022-01-25T08:35:25.3737127Z Jan 25 08:35:25 java.lang.AssertionError: Values 
> should be different. Actual: RUNNING
> 2022-01-25T08:35:25.3738167Z Jan 25 08:35:25  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-25T08:35:25.3739085Z Jan 25 08:35:25  at 
> org.junit.Assert.failEquals(Assert.java:187)
> 2022-01-25T08:35:25.3739922Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:163)
> 2022-01-25T08:35:25.3740846Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:177)
> 2022-01-25T08:35:25.3742302Z Jan 25 08:35:25  at 
> org.apache.flink.table.api.TableITCase.testCollectWithClose(TableITCase.scala:135)
> 2022-01-25T08:35:25.3743327Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-25T08:35:25.3744343Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-25T08:35:25.3745575Z Jan 25 08:35:25  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-25T08:35:25.3746840Z Jan 25 08:35:25  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-25T08:35:25.3747922Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-25T08:35:25.3749151Z Jan 25 08:35:25  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-25T08:35:25.3750422Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-25T08:35:25.3751820Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-25T08:35:25.3753196Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-25T08:35:25.3754253Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-01-25T08:35:25.3755441Z Jan 25 08:35:25  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
> 2022-01-25T08:35:25.3756656Z Jan 25 08:35:25  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-25T08:35:25.3757778Z Jan 25 08:35:25  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-25T08:35:25.3758821Z Jan 25 08:35:25  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-25T08:35:25.3759840Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-25T08:35:25.3760919Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-25T08:35:25.3762249Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-25T08:35:25.3763322Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-25T08:35:25.3764436Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-25T08:35:25.3765907Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-25T08:35:25.3766957Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-25T08:35:25.3768104Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-25T08:35:25.3769128Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-25T08:35:25.3770125Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-01-25T08:35:25.3771118Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-01-25T08:35:25.3772264Z Jan 25 08:35:25  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 

[jira] [Closed] (FLINK-29237) RexSimplify can not be removed after update to calcite 1.27

2023-02-10 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-29237.
---
Fix Version/s: 1.18.0
 Assignee: Sergey Nuyanzin
   Resolution: Fixed

master: d9102ddd755ec26d14256bcb733d814e063acd2b

> RexSimplify can not be removed after update to calcite 1.27
> ---
>
> Key: FLINK-29237
> URL: https://issues.apache.org/jira/browse/FLINK-29237
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> It seems some work needs to be done to make this happen.
> Currently, removing RexSimplify from the Flink repo leads to failures in 
> several tests, such as
> {{IntervalJoinTest#testFallbackToRegularJoin}}
> {{CalcITCase#testOrWithIsNullInIf}}
> {{CalcITCase#testOrWithIsNullPredicate}}
> example of failure
> {noformat}
> Sep 07 11:25:08 java.lang.AssertionError: 
> Sep 07 11:25:08 
> Sep 07 11:25:08 Results do not match for query:
> Sep 07 11:25:08   
> Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
> Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
> Sep 07 11:25:08 
> Sep 07 11:25:08 
> Sep 07 11:25:08 Results
> Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
> Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
> Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
> Sep 07 11:25:08 !+I[null, 999, NullTuple]   
> Sep 07 11:25:08 !+I[null, 999, NullTuple]   
> Sep 07 11:25:08 
> Sep 07 11:25:08 Plan:
> Sep 07 11:25:08   == Abstract Syntax Tree ==
> Sep 07 11:25:08 LogicalProject(inputs=[0..2])
> Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
> NULL($0))])
> Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
> default_database, NullTable3]])
> Sep 07 11:25:08 
> Sep 07 11:25:08 == Optimized Logical Plan ==
> Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
> TRUE])])
> Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, 
> default_database, NullTable3]], fields=[a, b, c])
> Sep 07 11:25:08 
> Sep 07 11:25:08
> Sep 07 11:25:08   at org.junit.Assert.fail(Assert.java:89)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
> Sep 07 11:25:08   at scala.Option.foreach(Option.scala:257)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Sep 07 11:25:08 
> {noformat}





[jira] [Closed] (FLINK-30927) Several tests started generate output with two non-abstract methods have the same parameter types, declaring type and return type

2023-02-09 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30927.
---
Fix Version/s: (was: 1.16.2)
   Resolution: Fixed

master: 96a296db723575d64857482a1278744e4c41201f
release-1.17: fd2ccf2d9586e2ffb92d8a6ccb5a5a303d32ef2a

> Several tests started generate output with two non-abstract methods  have the 
> same parameter types, declaring type and return type
> --
>
> Key: FLINK-30927
> URL: https://issues.apache.org/jira/browse/FLINK-30927
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0, 1.16.2
>Reporter: Sergey Nuyanzin
>Assignee: Krzysztof Chmielewski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> e.g. 
> org.apache.flink.table.planner.runtime.stream.sql.MatchRecognizeITCase#testUserDefinedFunctions
>  
> it seems that during code splitting it starts generating methods with the 
> same signature
>  
> {noformat}
> org.codehaus.janino.InternalCompilerException: Compiling 
> "MatchRecognizePatternProcessFunction$77": Two non-abstract methods "default 
> void MatchRecognizePatternProcessFunction$77.processMatch_0(java.util.Map, 
> org.apache.flink.cep.functions.PatternProcessFunction$Context, 
> org.apache.flink.util.Collector) throws java.lang.Exception" have the same 
> parameter types, declaring type and return type
> {noformat}
>  
> Probably could be a side effect of 
> https://issues.apache.org/jira/browse/FLINK-27246





[jira] [Updated] (FLINK-30927) Several tests started generate output with two non-abstract methods have the same parameter types, declaring type and return type

2023-02-09 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30927:

Affects Version/s: (was: 1.16.2)

> Several tests started generate output with two non-abstract methods  have the 
> same parameter types, declaring type and return type
> --
>
> Key: FLINK-30927
> URL: https://issues.apache.org/jira/browse/FLINK-30927
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Sergey Nuyanzin
>Assignee: Krzysztof Chmielewski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> e.g. 
> org.apache.flink.table.planner.runtime.stream.sql.MatchRecognizeITCase#testUserDefinedFunctions
>  
> it seems that during code splitting it starts generating methods with the 
> same signature
>  
> {noformat}
> org.codehaus.janino.InternalCompilerException: Compiling 
> "MatchRecognizePatternProcessFunction$77": Two non-abstract methods "default 
> void MatchRecognizePatternProcessFunction$77.processMatch_0(java.util.Map, 
> org.apache.flink.cep.functions.PatternProcessFunction$Context, 
> org.apache.flink.util.Collector) throws java.lang.Exception" have the same 
> parameter types, declaring type and return type
> {noformat}
>  
> Probably could be a side effect of 
> https://issues.apache.org/jira/browse/FLINK-27246





[jira] [Closed] (FLINK-27246) Code of method "processElement(Lorg/apache/flink/streaming/runtime/streamrecord/StreamRecord;)V" of class "HashAggregateWithKeys$9211" grows beyond 64 KB

2023-02-09 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-27246.
---
Resolution: Fixed

master: af9a1128f728c691b896bc9c591e9be1327601c4
release-1.16: f8bf7cc0c26773eecc928401f0c9ca5c714cd7bf

> Code of method 
> "processElement(Lorg/apache/flink/streaming/runtime/streamrecord/StreamRecord;)V"
>  of class "HashAggregateWithKeys$9211" grows beyond 64 KB
> -
>
> Key: FLINK-27246
> URL: https://issues.apache.org/jira/browse/FLINK-27246
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.15.3, 1.16.1
>Reporter: Maciej Bryński
>Assignee: Krzysztof Chmielewski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.2
>
> Attachments: endInput_falseFilter9123_split9704.txt
>
>
> I think this bug should have been fixed by 
> https://issues.apache.org/jira/browse/FLINK-23007
> Unfortunately, I spotted it on Flink 1.14.3
> {code}
> java.lang.RuntimeException: Could not instantiate generated class 
> 'HashAggregateWithKeys$9211'
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:85)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:40)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:81)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:198)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.(RegularOperatorChain.java:63)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:666)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at java.lang.Thread.run(Unknown Source) ~[?:?]
> Caused by: org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:83)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   ... 11 more
> Caused by: 
> org.apache.flink.shaded.guava30.com.google.common.util.concurrent.UncheckedExecutionException:
>  org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache.get(LocalCache.java:3962)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4859)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:74)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> 

[jira] [Commented] (FLINK-25813) TableITCase.testCollectWithClose failed on azure

2023-02-06 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17685066#comment-17685066
 ] 

Caizhi Weng commented on FLINK-25813:
-

Hi [~mapohl]. Sorry for the late reply; I had almost forgotten this ticket. 
I submitted a pull request and zentol reviewed it. I asked him a question but 
he didn't reply, so I guess things just stopped there.

This ticket is just about an unstable test and it shouldn't have any impact on 
users, so it is not a blocker for releases. If we want to resolve this ticket, 
I'd like someone to help with the review. Thanks.

> TableITCase.testCollectWithClose failed on azure
> 
>
> Key: FLINK-25813
> URL: https://issues.apache.org/jira/browse/FLINK-25813
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> 2022-01-25T08:35:25.3735884Z Jan 25 08:35:25 [ERROR] 
> TableITCase.testCollectWithClose  Time elapsed: 0.377 s  <<< FAILURE!
> 2022-01-25T08:35:25.3737127Z Jan 25 08:35:25 java.lang.AssertionError: Values 
> should be different. Actual: RUNNING
> 2022-01-25T08:35:25.3738167Z Jan 25 08:35:25  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-25T08:35:25.3739085Z Jan 25 08:35:25  at 
> org.junit.Assert.failEquals(Assert.java:187)
> 2022-01-25T08:35:25.3739922Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:163)
> 2022-01-25T08:35:25.3740846Z Jan 25 08:35:25  at 
> org.junit.Assert.assertNotEquals(Assert.java:177)
> 2022-01-25T08:35:25.3742302Z Jan 25 08:35:25  at 
> org.apache.flink.table.api.TableITCase.testCollectWithClose(TableITCase.scala:135)
> 2022-01-25T08:35:25.3743327Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-25T08:35:25.3744343Z Jan 25 08:35:25  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-25T08:35:25.3745575Z Jan 25 08:35:25  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-25T08:35:25.3746840Z Jan 25 08:35:25  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-25T08:35:25.3747922Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-25T08:35:25.3749151Z Jan 25 08:35:25  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-25T08:35:25.3750422Z Jan 25 08:35:25  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-25T08:35:25.3751820Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-25T08:35:25.3753196Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-25T08:35:25.3754253Z Jan 25 08:35:25  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-01-25T08:35:25.3755441Z Jan 25 08:35:25  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
> 2022-01-25T08:35:25.3756656Z Jan 25 08:35:25  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-25T08:35:25.3757778Z Jan 25 08:35:25  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-25T08:35:25.3758821Z Jan 25 08:35:25  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-25T08:35:25.3759840Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-25T08:35:25.3760919Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-25T08:35:25.3762249Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-25T08:35:25.3763322Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-25T08:35:25.3764436Z Jan 25 08:35:25  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-25T08:35:25.3765907Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-25T08:35:25.3766957Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-25T08:35:25.3768104Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-25T08:35:25.3769128Z Jan 25 08:35:25  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-25T08:35:25.3770125Z Jan 25 08:35:25  at 
> 

[jira] [Closed] (FLINK-30861) Table Store Hive Catalog throws java.lang.ClassNotFoundException: org.apache.hadoop.hive.common.ValidWriteIdList under certain environment

2023-02-01 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30861.
---
Resolution: Fixed

master: 5e09dfec910c2aa2dfcf3eca4adcb8191524bf04
release-0.3: 78fdd09077431a7284cd8519cca5ae06c7acb60e

> Table Store Hive Catalog throws java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.common.ValidWriteIdList under certain environment
> --
>
> Key: FLINK-30861
> URL: https://issues.apache.org/jira/browse/FLINK-30861
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.4.0, table-store-0.3.1
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0, table-store-0.3.1
>
>
> Table Store Hive Catalog throws {{java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.common.ValidWriteIdList}} in certain environments. 
> We need to package the {{hive-storage-api}} dependency.
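Bundling a missing dependency like this is typically done in the module's Maven build. The following is a minimal, hypothetical sketch only — the coordinates, version, and shade configuration are illustrative and not taken from the actual Table Store pom:

```xml
<!-- Hypothetical sketch: bundle hive-storage-api into the shaded jar so that
     org.apache.hadoop.hive.common.ValidWriteIdList is available at runtime.
     Version and module placement are assumptions, not the project's actual
     configuration. -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
  <version>2.7.2</version>
</dependency>

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <includes>
        <!-- Ensure the dependency is packaged, not merely compiled against. -->
        <include>org.apache.hive:hive-storage-api</include>
      </includes>
    </artifactSet>
  </configuration>
</plugin>
```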



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30861) Table Store Hive Catalog throws java.lang.ClassNotFoundException: org.apache.hadoop.hive.common.ValidWriteIdList under certain environment

2023-01-31 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30861:
---

 Summary: Table Store Hive Catalog throws 
java.lang.ClassNotFoundException: 
org.apache.hadoop.hive.common.ValidWriteIdList under certain environment
 Key: FLINK-30861
 URL: https://issues.apache.org/jira/browse/FLINK-30861
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.4.0, table-store-0.3.1
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.4.0, table-store-0.3.1


Table Store Hive Catalog throws {{java.lang.ClassNotFoundException: 
org.apache.hadoop.hive.common.ValidWriteIdList}} in certain environments. We 
need to package the {{hive-storage-api}} dependency.





[jira] [Closed] (FLINK-30620) Table Store Hive catalog supports specifying custom Hive metastore client

2023-01-12 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30620.
---
Resolution: Fixed

master: 6144fa3a65356322b800d0b392099f0ac92043f3
release-0.3: 7a03fcc2b63aa8ca502c244df8da91a67588f7b5

> Table Store Hive catalog supports specifying custom Hive metastore client
> -
>
> Key: FLINK-30620
> URL: https://issues.apache.org/jira/browse/FLINK-30620
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0, table-store-0.3.1
>
>
> Currently the Hive metastore client class is hard-coded in {{HiveCatalog}}; 
> however, users may want to specify a custom Hive metastore client to read 
> and write Hive-compliant storage.
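The improvement amounts to resolving the client class from configuration instead of hard-coding it. A minimal sketch of that idea — all names here ({{ClientFactory}}, the option key) are illustrative, not the actual Table Store API:

```java
import java.lang.reflect.Constructor;

// Sketch: instantiate a user-specified metastore client class via reflection
// rather than hard-coding one implementation. Names are hypothetical.
public class ClientFactory {
    // Hypothetical configuration key for the user-specified client class.
    public static final String CLIENT_CLASS_KEY = "metastore.client.class";

    public static Object createClient(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            Constructor<?> ctor = clazz.getDeclaredConstructor();
            return ctor.newInstance(); // user class needs a no-arg constructor
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(
                    "Cannot instantiate metastore client " + className, e);
        }
    }

    public static void main(String[] args) {
        // Any class with a no-arg constructor works as a stand-in here.
        Object client = createClient("java.util.ArrayList");
        System.out.println(client.getClass().getName());
    }
}
```

In practice the real implementation would also validate that the configured class implements the expected metastore client interface before use.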





[jira] [Closed] (FLINK-30646) Table Store Hive catalog throws ClassNotFoundException when custom hive-site.xml is presented

2023-01-12 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30646.
---
Resolution: Fixed

master: 072640a72fe18bf3c0439cd4f0ec7602e0b0ff80
release-0.3: fcea22e114888af239e3a04e130f942df655122e

> Table Store Hive catalog throws ClassNotFoundException when custom 
> hive-site.xml is presented
> -
>
> Key: FLINK-30646
> URL: https://issues.apache.org/jira/browse/FLINK-30646
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0, table-store-0.3.1
>
>
> For Hive 2.3.9, if a custom {{hive-site.xml}} is present in 
> {{$HIVE_HOME/conf}}, the following exception is thrown when creating a Table 
> Store Hive catalog in Flink.
> {code}
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2273) 
> ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2367) 
> ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2393) 
> ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.loadFilterHooks(HiveMetaStoreClient.java:250)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:145)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[?:1.8.0_352]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  ~[?:1.8.0_352]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ~[?:1.8.0_352]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> ~[?:1.8.0_352]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:83)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:415)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.hive.HiveCatalog.(HiveCatalog.java:82) 
> ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
>  ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:106)
>  ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:66)
>  ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
>  ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
>  ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
>   at 
> org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:435)
>  ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
>   

[jira] [Updated] (FLINK-30646) Table Store Hive catalog throws ClassNotFoundException when custom hive-site.xml is presented

2023-01-12 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30646:

Description: 
For Hive 2.3.9, if a custom {{hive-site.xml}} is present in 
{{$HIVE_HOME/conf}}, the following exception is thrown when creating a Table 
Store Hive catalog in Flink.

{code}
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl not found
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2273) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2367) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2393) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.loadFilterHooks(HiveMetaStoreClient.java:250)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:145)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0_352]
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_352]
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_352]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_352]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:415)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalog.(HiveCatalog.java:82) 
~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:106)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:66)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:435)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1426)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1172)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:206)
 ~[flink-sql-client-1.16.0.jar:1.16.0]
... 10 more
{code}

This is because {{hive-default.xml.template}} contains a property named 
{{hive.metastore.filter.hook}} whose default value is 
{{org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl}}. However, 
all Hive packages in Table Store are shaded (relocated), so the class loader 
cannot find the original, unrelocated class.

We need to remove the relocation of Hive classes.
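For context, the kind of shade-plugin rule being removed looks like the sketch below. This is illustrative only — the exact pattern and module come from the stack trace above ({{org.apache.flink.table.store.shaded.org.apache.hadoop.hive...}}), not from the project's actual pom:

```xml
<!-- Illustrative sketch: a relocation rule like this rewrites
     org.apache.hadoop.hive classes into a shaded namespace. Deleting it
     lets class names referenced from hive-site.xml (e.g.
     hive.metastore.filter.hook's default value) resolve unchanged. -->
<relocation>
  <pattern>org.apache.hadoop.hive</pattern>
  <shadedPattern>org.apache.flink.table.store.shaded.org.apache.hadoop.hive</shadedPattern>
</relocation>
```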

  was:
For Hive 

[jira] [Created] (FLINK-30646) Table Store Hive catalog throws ClassNotFoundException when custom hive-site.xml is presented

2023-01-12 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30646:
---

 Summary: Table Store Hive catalog throws ClassNotFoundException 
when custom hive-site.xml is presented
 Key: FLINK-30646
 URL: https://issues.apache.org/jira/browse/FLINK-30646
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


For Hive 2.3.9, if a custom {{hive-site.xml}} is present in 
{{$HIVE_HOME/conf}}, the following exception is thrown when creating a Table 
Store Hive catalog in Flink.

{code}
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl not found
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2273) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2367) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2393) 
~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.loadFilterHooks(HiveMetaStoreClient.java:250)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:145)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0_352]
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_352]
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_352]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_352]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:415)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalog.(HiveCatalog.java:82) 
~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
 ~[flink-table-store-hive-catalog-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:106)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:66)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
 ~[flink-table-store-dist-0.4-SNAPSHOT.jar:0.4-SNAPSHOT]
at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:435)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1426)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1172)
 ~[flink-table-api-java-uber-1.16.0.jar:1.16.0]
at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:206)
 ~[flink-sql-client-1.16.0.jar:1.16.0]
... 10 more
{code}




[jira] [Created] (FLINK-30620) Table Store Hive catalog supports specifying custom Hive metastore client

2023-01-10 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30620:
---

 Summary: Table Store Hive catalog supports specifying custom Hive 
metastore client
 Key: FLINK-30620
 URL: https://issues.apache.org/jira/browse/FLINK-30620
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


Currently the Hive metastore client class is hard-coded in {{HiveCatalog}}; 
however, users may want to specify a custom Hive metastore client to read and 
write Hive-compliant storage.





[jira] [Closed] (FLINK-30603) CompactActionITCase in table store is unstable

2023-01-10 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30603.
---
Resolution: Fixed

master: b7188bcc46989c66e44f1fb04cd45972e1a6fe50
release-0.3: ad164c32fca33c365192b2025eaea61980d961eb

> CompactActionITCase in table store is unstable
> --
>
> Key: FLINK-30603
> URL: https://issues.apache.org/jira/browse/FLINK-30603
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-table-store/actions/runs/3871130631/jobs/6598625030
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:CompactActionITCase.testStreamingCompact:187 expected:<[+I 
> 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221209]>





[jira] [Closed] (FLINK-30611) Expire snapshot should be reentrant

2023-01-10 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30611.
---
Resolution: Fixed

master: 929a411f29f4fc76dfead6e716daae56c165724c
release-0.3: 1f0f28c3cf9c680ba9294362f3a4ba128236f8ab

> Expire snapshot should be reentrant
> ---
>
> Key: FLINK-30611
> URL: https://issues.apache.org/jira/browse/FLINK-30611
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> At present, expire throws an exception if a file is incomplete.
> However, a snapshot being expired may itself be incomplete, because the 
> expire process can be interrupted or killed at any time.
> Therefore, we should make expire safe and reentrant, and avoid throwing 
> exceptions.
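The core of making expire reentrant is treating an already-missing file as a successful deletion rather than an error. A minimal sketch of that idea, with illustrative helper names (not the actual Table Store code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: a reentrant delete for snapshot expiration. If a previous expire
// attempt was interrupted mid-way, some files are already gone; their absence
// must not raise an exception on the next attempt. Names are hypothetical.
public class ExpireSketch {
    /** Returns true unless a genuine I/O error occurs; a missing file is fine. */
    public static boolean deleteQuietly(Path file) {
        try {
            Files.deleteIfExists(file); // no exception if the file is already gone
            return true;
        } catch (IOException e) {
            return false; // real failure; a later expire run can retry
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("snapshot-", ".manifest");
        System.out.println(deleteQuietly(f)); // deletes the file
        System.out.println(deleteQuietly(f)); // reentrant: second call still succeeds
    }
}
```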





[jira] [Assigned] (FLINK-30611) Expire snapshot should be reentrant

2023-01-10 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30611:
---

Assignee: Caizhi Weng

> Expire snapshot should be reentrant
> ---
>
> Key: FLINK-30611
> URL: https://issues.apache.org/jira/browse/FLINK-30611
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> At present, expire throws an exception if a file is incomplete.
> However, a snapshot being expired may itself be incomplete, because the 
> expire process can be interrupted or killed at any time.
> Therefore, we should make expire safe and reentrant, and avoid throwing 
> exceptions.





[jira] [Assigned] (FLINK-30603) CompactActionITCase in table store is unstable

2023-01-09 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30603:
---

Assignee: Caizhi Weng

> CompactActionITCase in table store is unstable
> --
>
> Key: FLINK-30603
> URL: https://issues.apache.org/jira/browse/FLINK-30603
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Assignee: Caizhi Weng
>Priority: Major
>
> https://github.com/apache/flink-table-store/actions/runs/3871130631/jobs/6598625030
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:CompactActionITCase.testStreamingCompact:187 expected:<[+I 
> 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221209]>





[jira] [Created] (FLINK-30589) Snapshot expiration should be skipped in Table Store dedicated writer jobs

2023-01-06 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30589:
---

 Summary: Snapshot expiration should be skipped in Table Store 
dedicated writer jobs
 Key: FLINK-30589
 URL: https://issues.apache.org/jira/browse/FLINK-30589
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


Currently Table Store dedicated writer jobs also expire snapshots. This may 
cause conflicts when multiple writer jobs are running.

We should expire snapshots only in the dedicated compact job.
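The change boils down to gating expiration on the job's role. A trivial sketch of that gate — the class and enum names are hypothetical, not the actual Table Store API:

```java
// Sketch: only the dedicated compact job expires snapshots; plain writer
// jobs commit files but skip expiration, avoiding conflicts between
// concurrent writers. All names here are illustrative.
public class CommitPolicy {
    public enum JobRole { WRITER, DEDICATED_COMPACT }

    private final JobRole role;

    public CommitPolicy(JobRole role) {
        this.role = role;
    }

    /** Writers never expire; only the compact job does. */
    public boolean shouldExpireSnapshots() {
        return role == JobRole.DEDICATED_COMPACT;
    }

    public static void main(String[] args) {
        System.out.println(new CommitPolicy(JobRole.WRITER).shouldExpireSnapshots());            // false
        System.out.println(new CommitPolicy(JobRole.DEDICATED_COMPACT).shouldExpireSnapshots()); // true
    }
}
```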





[jira] [Closed] (FLINK-30573) Table Store dedicated compact job may skip some records when checkpoint interval is long

2023-01-06 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30573.
---
Resolution: Fixed

master: 8fdbbd4e8c22a82d79509cab0add4a6aa7331672
release-0.3: 1ba8f372a05d64e7370f7dcbb65b77e23079f8ef

> Table Store dedicated compact job may skip some records when checkpoint 
> interval is long
> 
>
> Key: FLINK-30573
> URL: https://issues.apache.org/jira/browse/FLINK-30573
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.4.0
>
>
> Currently the sink for the Table Store dedicated compact job only receives 
> records about which buckets have changed, not which files have changed. If 
> the writer is kept open, new files in a bucket may be skipped.





[jira] [Created] (FLINK-30573) Table Store dedicated compact job may skip some records when checkpoint interval is long

2023-01-05 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30573:
---

 Summary: Table Store dedicated compact job may skip some records 
when checkpoint interval is long
 Key: FLINK-30573
 URL: https://issues.apache.org/jira/browse/FLINK-30573
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


Currently the sink for the Table Store dedicated compact job only receives 
records about which buckets have changed, not which files have changed. If the 
writer is kept open, new files in a bucket may be skipped.





[jira] [Closed] (FLINK-30521) Improve `Altering Tables` of Doc

2022-12-28 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30521.
---
Resolution: Fixed

master: 659a5ce4b1b7a45c506391476b3b67396346403f
release-0.3: cfd132362b14ba66b41db3264ca70d34e496cc78

> Improve `Altering Tables` of Doc
> 
>
> Key: FLINK-30521
> URL: https://issues.apache.org/jira/browse/FLINK-30521
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: yuzelin
>Assignee: yuzelin
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.4.0
>
>
> Add more syntax descriptions to the `Altering Tables` section.





[jira] [Updated] (FLINK-30521) Improve `Altering Tables` of Doc

2022-12-28 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30521:

Fix Version/s: table-store-0.3.0
   table-store-0.4.0

> Improve `Altering Tables` of Doc
> 
>
> Key: FLINK-30521
> URL: https://issues.apache.org/jira/browse/FLINK-30521
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: yuzelin
>Assignee: yuzelin
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.4.0
>
>
> Add more syntax descriptions to the `Altering Tables` section.





[jira] [Assigned] (FLINK-30521) Improve `Altering Tables` of Doc

2022-12-28 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30521:
---

Assignee: yuzelin

> Improve `Altering Tables` of Doc
> 
>
> Key: FLINK-30521
> URL: https://issues.apache.org/jira/browse/FLINK-30521
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: yuzelin
>Assignee: yuzelin
>Priority: Major
>  Labels: pull-request-available
>
> Add more syntax descriptions to the `Altering Tables` section.





[jira] [Updated] (FLINK-30521) Improve `Altering Tables` of Doc

2022-12-28 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30521:

Affects Version/s: table-store-0.3.0

> Improve `Altering Tables` of Doc
> 
>
> Key: FLINK-30521
> URL: https://issues.apache.org/jira/browse/FLINK-30521
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: yuzelin
>Assignee: yuzelin
>Priority: Major
>  Labels: pull-request-available
>
> Add more syntax descriptions to the `Altering Tables` section.





[jira] [Closed] (FLINK-30506) Add documentation for writing Table Store with Spark3

2022-12-28 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30506.
---
Resolution: Fixed

master: 33c30f2e79e85ee15130320d6e712601c1b44d6c
release-0.3: d02276de2fdcf90b50b3cbb1da75055b960704e1

> Add documentation for writing Table Store with Spark3
> -
>
> Key: FLINK-30506
> URL: https://issues.apache.org/jira/browse/FLINK-30506
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.4.0
>
>
> Table Store 0.3 supports writing with Spark3. We need to add documentation.





[jira] [Created] (FLINK-30506) Add documentation for writing Table Store with Spark3

2022-12-26 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30506:
---

 Summary: Add documentation for writing Table Store with Spark3
 Key: FLINK-30506
 URL: https://issues.apache.org/jira/browse/FLINK-30506
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


Table Store 0.3 supports writing with Spark3. We need to add documentation.





[jira] [Closed] (FLINK-30504) Fix UnsupportedFileSystemSchemeException when writing Table Store on OSS with other engines

2022-12-26 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30504.
---
  Assignee: Caizhi Weng
Resolution: Fixed

master: 929c1110bdbf521ad0a9eb09a65d33f76a2b5990
release-0.3: fe8de4c32c148bb87f5a40649ca2373e88f321d8

> Fix UnsupportedFileSystemSchemeException when writing Table Store on OSS with 
> other engines
> ---
>
> Key: FLINK-30504
> URL: https://issues.apache.org/jira/browse/FLINK-30504
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.4.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.4.0
>
>
> Currently when writing Table Store tables on OSS with other engines (for 
> example Spark), the following exception will occur.
> {code}
> 22/12/23 17:54:12 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 3) 
> (core-1-1.c-c9f1b761c8946269.cn-huhehaote.emr.aliyuncs.com executor 2): 
> java.lang.RuntimeException: Failed to find latest snapshot id
>   at 
> org.apache.flink.table.store.file.utils.SnapshotManager.latestSnapshotId(SnapshotManager.java:81)
>   at 
> org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.scanExistingFileMetas(AbstractFileStoreWrite.java:87)
>   at 
> org.apache.flink.table.store.file.operation.KeyValueFileStoreWrite.createWriter(KeyValueFileStoreWrite.java:113)
>   at 
> org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.createWriter(AbstractFileStoreWrite.java:227)
>   at 
> org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.lambda$getWriter$1(AbstractFileStoreWrite.java:217)
>   at java.util.HashMap.computeIfAbsent(HashMap.java:1128)
>   at 
> org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.getWriter(AbstractFileStoreWrite.java:217)
>   at 
> org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.write(AbstractFileStoreWrite.java:106)
>   at 
> org.apache.flink.table.store.table.sink.TableWriteImpl.write(TableWriteImpl.java:63)
>   at 
> org.apache.flink.table.store.spark.SparkWrite$WriteRecords.call(SparkWrite.java:124)
>   at 
> org.apache.flink.table.store.spark.SparkWrite$WriteRecords.call(SparkWrite.java:105)
>   at 
> org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.$anonfun$mapValues$3(PairRDDFunctions.scala:752)
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
>   at scala.collection.Iterator.foreach(Iterator.scala:943)
>   at scala.collection.Iterator.foreach$(Iterator.scala:943)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
>   at scala.collection.TraversableOnce.reduceLeft(TraversableOnce.scala:237)
>   at scala.collection.TraversableOnce.reduceLeft$(TraversableOnce.scala:220)
>   at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1431)
>   at org.apache.spark.rdd.RDD.$anonfun$reduce$2(RDD.scala:1097)
>   at org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2322)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.run(Task.scala:136)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:750)
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: 
> Could not find a file system implementation for scheme 'oss'. The scheme is 
> directly supported by Flink through the following plugin(s): 
> flink-oss-fs-hadoop. Please ensure that each plugin resides within its own 
> subfolder within the plugins directory. See 
> https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/
>  for more information. If you want to use a Hadoop file system for that 
> scheme, please add the scheme to the configuration 
> fs.allowed-fallback-filesystems. For a full list of supported file systems, 
> please see 
> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
>   at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:515)
>   at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409)
>   at 

[jira] [Created] (FLINK-30504) Fix UnsupportedFileSystemSchemeException when writing Table Store on OSS with other engines

2022-12-26 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30504:
---

 Summary: Fix UnsupportedFileSystemSchemeException when writing 
Table Store on OSS with other engines
 Key: FLINK-30504
 URL: https://issues.apache.org/jira/browse/FLINK-30504
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.4.0
Reporter: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.4.0


Currently when writing Table Store tables on OSS with other engines (for 
example Spark), the following exception will occur.

{code}
22/12/23 17:54:12 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 3) 
(core-1-1.c-c9f1b761c8946269.cn-huhehaote.emr.aliyuncs.com executor 2): 
java.lang.RuntimeException: Failed to find latest snapshot id
  at 
org.apache.flink.table.store.file.utils.SnapshotManager.latestSnapshotId(SnapshotManager.java:81)
  at 
org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.scanExistingFileMetas(AbstractFileStoreWrite.java:87)
  at 
org.apache.flink.table.store.file.operation.KeyValueFileStoreWrite.createWriter(KeyValueFileStoreWrite.java:113)
  at 
org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.createWriter(AbstractFileStoreWrite.java:227)
  at 
org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.lambda$getWriter$1(AbstractFileStoreWrite.java:217)
  at java.util.HashMap.computeIfAbsent(HashMap.java:1128)
  at 
org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.getWriter(AbstractFileStoreWrite.java:217)
  at 
org.apache.flink.table.store.file.operation.AbstractFileStoreWrite.write(AbstractFileStoreWrite.java:106)
  at 
org.apache.flink.table.store.table.sink.TableWriteImpl.write(TableWriteImpl.java:63)
  at 
org.apache.flink.table.store.spark.SparkWrite$WriteRecords.call(SparkWrite.java:124)
  at 
org.apache.flink.table.store.spark.SparkWrite$WriteRecords.call(SparkWrite.java:105)
  at 
org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070)
  at 
org.apache.spark.rdd.PairRDDFunctions.$anonfun$mapValues$3(PairRDDFunctions.scala:752)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
  at scala.collection.Iterator.foreach(Iterator.scala:943)
  at scala.collection.Iterator.foreach$(Iterator.scala:943)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
  at scala.collection.TraversableOnce.reduceLeft(TraversableOnce.scala:237)
  at scala.collection.TraversableOnce.reduceLeft$(TraversableOnce.scala:220)
  at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1431)
  at org.apache.spark.rdd.RDD.$anonfun$reduce$2(RDD.scala:1097)
  at org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2322)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:136)
  at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could 
not find a file system implementation for scheme 'oss'. The scheme is directly 
supported by Flink through the following plugin(s): flink-oss-fs-hadoop. Please 
ensure that each plugin resides within its own subfolder within the plugins 
directory. See 
https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/
 for more information. If you want to use a Hadoop file system for that scheme, 
please add the scheme to the configuration fs.allowed-fallback-filesystems. For 
a full list of supported file systems, please see 
https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
  at 
org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:515)
  at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409)
  at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274)
  at 
org.apache.flink.table.store.file.utils.SnapshotManager.findLatest(SnapshotManager.java:164)
  at 
org.apache.flink.table.store.file.utils.SnapshotManager.latestSnapshotId(SnapshotManager.java:79)
  ... 30 more
{code}
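As the exception message explains, the fix is either to place the flink-oss-fs-hadoop plugin in its own subfolder under the Flink plugins directory, or to add 'oss' to fs.allowed-fallback-filesystems. A minimal sketch of the expected plugin layout follows; the FLINK_HOME path and jar name are illustrative assumptions, not taken from this ticket.

```python
# Sketch of the plugin directory layout Flink expects for filesystem plugins.
# "./flink" stands in for your Flink distribution directory.
from pathlib import Path

flink_home = Path("./flink")
plugin_dir = flink_home / "plugins" / "oss-fs-hadoop"
plugin_dir.mkdir(parents=True, exist_ok=True)

# The plugin jar must live in its own subfolder, e.g.:
#   plugins/oss-fs-hadoop/flink-oss-fs-hadoop-<version>.jar
# Alternatively, set in flink-conf.yaml:
#   fs.allowed-fallback-filesystems: oss
```

Each plugin gets its own subfolder so Flink can load it with an isolated classloader, which is what avoids the dependency conflicts described here.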



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-30247) Introduce Time Travel reading for table store

2022-12-26 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30247.
---
Resolution: Fixed

master: 7e9cf0dd907c84b67850503ff61e27944363d06b
release-0.3: b7d7518e07cfdd0e57052e384fc56f50099b8a48

> Introduce Time Travel reading for table store
> -
>
> Key: FLINK-30247
> URL: https://issues.apache.org/jira/browse/FLINK-30247
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> For example:
> - SELECT * FROM T /*+ OPTIONS('as-of-timestamp-mills'='121230')*/; Read the 
> snapshot specified by commit time.
> - SELECT * FROM T /*+ OPTIONS('as-of-snapshot'='12')*/; Read the snapshot 
> specified by snapshot id.





[jira] [Assigned] (FLINK-30247) Introduce Time Travel reading for table store

2022-12-26 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30247:
---

Assignee: Shammon

> Introduce Time Travel reading for table store
> -
>
> Key: FLINK-30247
> URL: https://issues.apache.org/jira/browse/FLINK-30247
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> For example:
> - SELECT * FROM T /*+ OPTIONS('as-of-timestamp-mills'='121230')*/; Read the 
> snapshot specified by commit time.
> - SELECT * FROM T /*+ OPTIONS('as-of-snapshot'='12')*/; Read the snapshot 
> specified by snapshot id.





[jira] [Commented] (FLINK-27246) Code of method "processElement(Lorg/apache/flink/streaming/runtime/streamrecord/StreamRecord;)V" of class "HashAggregateWithKeys$9211" grows beyond 64 KB

2022-12-25 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17651905#comment-17651905
 ] 

Caizhi Weng commented on FLINK-27246:
-

Hi [~KristoffSC]!

Sorry for the long delay as I'm mainly working on the table store module.

The idea looks good to me and you can continue working on your fix.

Your fix seems to rewrite both the {{WHILE}} statements and the {{IF}} 
statements. For {{IF}} statements we already have {{IfStatementRewriter}}. Can 
we remove the special {{IfStatementRewriter}} after your fix is implemented? Or 
are they working on different things? If so, what's the difference?

> Code of method 
> "processElement(Lorg/apache/flink/streaming/runtime/streamrecord/StreamRecord;)V"
>  of class "HashAggregateWithKeys$9211" grows beyond 64 KB
> -
>
> Key: FLINK-27246
> URL: https://issues.apache.org/jira/browse/FLINK-27246
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.16.0, 1.15.3
>Reporter: Maciej Bryński
>Assignee: Krzysztof Chmielewski
>Priority: Major
> Attachments: endInput_falseFilter9123_split9704.txt
>
>
> I think this bug should get fixed in 
> https://issues.apache.org/jira/browse/FLINK-23007
> Unfortunately I spotted it on Flink 1.14.3
> {code}
> java.lang.RuntimeException: Could not instantiate generated class 
> 'HashAggregateWithKeys$9211'
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:85)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:40)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:81)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:198)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.(RegularOperatorChain.java:63)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:666)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) 
> ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at java.lang.Thread.run(Unknown Source) ~[?:?]
> Caused by: org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:83)
>  ~[flink-table_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   ... 11 more
> Caused by: 
> org.apache.flink.shaded.guava30.com.google.common.util.concurrent.UncheckedExecutionException:
>  org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache.get(LocalCache.java:3962)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4859)
>  ~[flink-dist_2.12-1.14.3-stream1.jar:1.14.3-stream1]
>   at 
> 

[jira] [Closed] (FLINK-30458) Refactor Table Store Documentation

2022-12-21 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30458.
---
Resolution: Fixed

master: 4c607a7356a5fd43364f93d70264d1ede98113ab

> Refactor Table Store Documentation
> --
>
> Key: FLINK-30458
> URL: https://issues.apache.org/jira/browse/FLINK-30458
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Currently the table store documents are messy and lack basic concepts. This 
> ticket refactors the table store documentation and, at the same time, adds 
> basic concepts.





[jira] [Assigned] (FLINK-30453) Fix 'can't find CatalogFactory' error when using FLINK sql-client to add table store bundle jar

2022-12-20 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30453:
---

Assignee: yuzelin

> Fix 'can't find CatalogFactory' error when using FLINK sql-client to add 
> table store bundle jar
> ---
>
> Key: FLINK-30453
> URL: https://issues.apache.org/jira/browse/FLINK-30453
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Reporter: yuzelin
>Assignee: yuzelin
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Flink 1.16 introduced a new mechanism that allows passing a ClassLoader to 
> EnvironmentSettings 
> ([FLINK-15635|https://issues.apache.org/jira/browse/FLINK-15635]), and the 
> Flink SQL Client passes a `ClientWrapperClassLoader`, so the table store 
> CatalogFactory cannot be found if it is added with an 'ADD JAR' statement 
> through the SQL Client.





[jira] [Closed] (FLINK-30453) Fix 'can't find CatalogFactory' error when using FLINK sql-client to add table store bundle jar

2022-12-20 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30453.
---
Resolution: Fixed

master: 7e0d55ff3dc9fd48455b17d9a439647b0554d020

> Fix 'can't find CatalogFactory' error when using FLINK sql-client to add 
> table store bundle jar
> ---
>
> Key: FLINK-30453
> URL: https://issues.apache.org/jira/browse/FLINK-30453
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Reporter: yuzelin
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Flink 1.16 introduced a new mechanism that allows passing a ClassLoader to 
> EnvironmentSettings 
> ([FLINK-15635|https://issues.apache.org/jira/browse/FLINK-15635]), and the 
> Flink SQL Client passes a `ClientWrapperClassLoader`, so the table store 
> CatalogFactory cannot be found if it is added with an 'ADD JAR' statement 
> through the SQL Client.





[jira] [Created] (FLINK-30458) Refactor Table Store Documentation

2022-12-20 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30458:
---

 Summary: Refactor Table Store Documentation
 Key: FLINK-30458
 URL: https://issues.apache.org/jira/browse/FLINK-30458
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


Currently the table store documents are messy and lack basic concepts. This 
ticket refactors the table store documentation and, at the same time, adds 
basic concepts.





[jira] [Closed] (FLINK-30212) Introduce a TableStoreCompactJob class for users to submit compact jobs in Table Store

2022-12-18 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30212.
---
Resolution: Fixed

master: 15af711a1bf50b2d612b439a27225c8ad42acb1d

> Introduce a TableStoreCompactJob class for users to submit compact jobs in 
> Table Store
> --
>
> Key: FLINK-30212
> URL: https://issues.apache.org/jira/browse/FLINK-30212
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Currently Flink does not support SQL statements for compacting a table, so in 
> this ticket we create a TableStoreCompactJob class for users to submit 
> compact jobs in Table Store.





[jira] [Closed] (FLINK-30204) Table Store support separated compact jobs

2022-12-18 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30204.
---
Resolution: Fixed

> Table Store support separated compact jobs
> --
>
> Key: FLINK-30204
> URL: https://issues.apache.org/jira/browse/FLINK-30204
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: table-store-0.3.0
>
>
> Currently table store sinks write and compact data files from the same job. 
> While this implementation is sufficient and more economical for most users, 
> some users may expect higher or steadier write throughput.
> We decided to support creating separated compact jobs for Table Store. This 
> brings the following advantages:
> * Write jobs can concentrate only on writing files. Their throughput will be 
> higher and steadier.
> * Because only one compact job is created for each table, no commit conflicts 
> will occur.
> The structure of a separated compact job is sketched as follows:
> * There are three vertices in a compact job: one source vertex, one sink 
> (compactor) vertex and one commit vertex.
> * The source vertex is responsible for generating records containing the 
> partitions and buckets to be compacted.
> * The sink vertex accepts records containing partitions and buckets, and 
> compacts these buckets.
> * The commit vertex commits the changes from the sink vertex. It is possible 
> that the user mistakenly creates other compact jobs, so commit conflicts may 
> still occur. However, as compact changes are optional, this commit vertex 
> commits changes in an at-most-once style.
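The three-vertex structure described in this ticket can be sketched as a toy pipeline. This is purely illustrative Python, not the Table Store implementation; all names are hypothetical.

```python
# Toy model of the separated compact job: source -> compactor -> committer.
def source(table_buckets):
    """Emit records naming the (partition, bucket) pairs to compact."""
    for partition, bucket in table_buckets:
        yield {"partition": partition, "bucket": bucket}

def compactor(records, compact_fn):
    """Compact each referenced bucket and emit the resulting changes."""
    for rec in records:
        yield compact_fn(rec["partition"], rec["bucket"])

def committer(changes, try_commit):
    """At-most-once commit: attempt each change once, drop it on conflict."""
    return [change for change in changes if try_commit(change)]

# Example wiring with stub functions standing in for real compaction/commit.
buckets = [("dt=2022-12-18", 0), ("dt=2022-12-18", 1)]
changes = compactor(source(buckets), lambda p, b: (p, b, "compacted"))
committed = committer(changes, lambda change: True)
```

The at-most-once style shows up in `committer`: a failed `try_commit` simply drops the change instead of retrying, since compact changes are optional.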





[jira] [Closed] (FLINK-30410) Rename 'full' to 'latest-full' and 'compacted' to 'compacted-full' for scan.mode in Table Store

2022-12-18 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30410.
---
Resolution: Fixed

master: e6156e2f2998a5c7227f0a40512f03312f041e0a

> Rename 'full' to 'latest-full' and 'compacted' to 'compacted-full' for 
> scan.mode in Table Store
> ---
>
> Key: FLINK-30410
> URL: https://issues.apache.org/jira/browse/FLINK-30410
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> As discussed in the [dev mailing 
> list|https://lists.apache.org/thread/t76f0ofl6k7mlvqotp57ot10x8o1x90p], we're 
> going to rename some values for {{scan.mode}} in Table Store.
> Specifically, {{full}} will be renamed to {{latest-full}} and {{compacted}} 
> will be renamed to {{compacted-full}}, so users can understand that a full 
> snapshot will always be produced, whether in a batch job or a streaming job.





[jira] [Created] (FLINK-30410) Rename 'full' to 'latest-full' and 'compacted' to 'compacted-full' for scan.mode in Table Store

2022-12-14 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30410:
---

 Summary: Rename 'full' to 'latest-full' and 'compacted' to 
'compacted-full' for scan.mode in Table Store
 Key: FLINK-30410
 URL: https://issues.apache.org/jira/browse/FLINK-30410
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


As discussed in the [dev mailing 
list|https://lists.apache.org/thread/t76f0ofl6k7mlvqotp57ot10x8o1x90p], we're 
going to rename some values for {{scan.mode}} in Table Store.

Specifically, {{full}} will be renamed to {{latest-full}} and {{compacted}} 
will be renamed to {{compacted-full}}, so users can understand that a full 
snapshot will always be produced, whether in a batch job or a streaming job.
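The rename amounts to a simple value mapping. The helper below is a hypothetical migration sketch for illustration, not part of Table Store; only the option values come from this ticket.

```python
# Old scan.mode values and their new names; other values are unchanged.
SCAN_MODE_RENAMES = {"full": "latest-full", "compacted": "compacted-full"}

def migrate_scan_mode(value: str) -> str:
    """Map an old scan.mode value to its renamed form, passing others through."""
    return SCAN_MODE_RENAMES.get(value, value)
```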





[jira] [Closed] (FLINK-30398) Introduce S3 support for table store

2022-12-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30398.
---
Resolution: Fixed

master: 54fd3ab2d16999d628faf4ff62c701b12ca20725

> Introduce S3 support for table store
> 
>
> Key: FLINK-30398
> URL: https://issues.apache.org/jira/browse/FLINK-30398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> S3 support brings in a large number of dependencies, which can easily lead to 
> class conflicts. We need a plugin mechanism to load the corresponding jars 
> through a separate classloader.





[jira] [Closed] (FLINK-30211) Introduce CompactStoreSink for compact jobs in Table Store

2022-12-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30211.
---
Resolution: Fixed

master: 422c409b36efcd0e75fdde61b0601f405b18e3ef

> Introduce CompactStoreSink for compact jobs in Table Store
> --
>
> Key: FLINK-30211
> URL: https://issues.apache.org/jira/browse/FLINK-30211
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> This ticket introduces the {{CompactStoreSink}} for compact jobs in Table 
> Store.
> The behavior of {{CompactStoreSink}} is sketched as follows: This sink 
> accepts records containing the partitions and buckets to compact, and 
> performs compaction on these buckets.
>  





[jira] [Closed] (FLINK-30395) Refactor module name and documentation for filesystems

2022-12-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30395.
---
Resolution: Fixed

master: 0620350d292600b11ba97779bfdb8869b87597db

> Refactor module name and documentation for filesystems
> --
>
> Key: FLINK-30395
> URL: https://issues.apache.org/jira/browse/FLINK-30395
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> * flink-table-store-filesystem => flink-table-store-filesystems
> * flink-table-store-fs-oss-hadoop => flink-table-store-oss
> * introduce a new page for oss only





[jira] [Closed] (FLINK-30210) Refactor StoreCompactOperator to accept records containing partitions and buckets in Table Store

2022-12-13 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30210.
---
Resolution: Fixed

master: 2073bd66a4e41ac47a98b0c0370b0ccdeb516c62, 
333bb8dad697d65e70a4d672748bc1e55d7ded03

> Refactor StoreCompactOperator to accept records containing partitions and 
> buckets in Table Store
> 
>
> Key: FLINK-30210
> URL: https://issues.apache.org/jira/browse/FLINK-30210
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> In this ticket we refactor StoreCompactOperator to accept records containing 
> partitions and buckets in Table Store. The old {{ALTER TABLE COMPACT}} will 
> also be disabled in this ticket due to various restrictions.





[jira] [Closed] (FLINK-30209) Introduce CompactFileStoreSource for compact jobs of Table Store

2022-12-12 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30209.
---
Resolution: Fixed

master: 20dd568f3c2a9b2afd4cb5eb512cd24976602e27

> Introduce CompactFileStoreSource for compact jobs of Table Store
> 
>
> Key: FLINK-30209
> URL: https://issues.apache.org/jira/browse/FLINK-30209
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> In this ticket we create the {{CompactFileStoreSource}} Flink source for 
> separated compact jobs.
> The behavior of this source is sketched as follows:
> * For batch compact jobs, this source produces records containing all 
> partitions and buckets of the current table.
> * For streaming compact jobs, this source produces records containing all 
> modified partitions and buckets of a snapshot. This source will also monitor 
> newly created snapshots.





[jira] [Closed] (FLINK-30333) Supports lookup a partial-update table

2022-12-09 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30333.
---
Resolution: Fixed

master: e33f36ff07d13e55d26da81ebbc916d1fe0c2e15

> Supports lookup a partial-update table
> --
>
> Key: FLINK-30333
> URL: https://issues.apache.org/jira/browse/FLINK-30333
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> The lookup uses streaming read for reading the table (in TableStreamingReader).
> - We should support looking up a partial-update table with full compaction.
> - For a partial-update table without full compaction, we should throw an 
> exception.
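The guard described in the two bullets above amounts to something like the following. This is illustrative Python with hypothetical parameter names; only the rule itself comes from the ticket.

```python
# Hypothetical validation for lookups on a table store table: allow lookup on
# partial-update tables only when full compaction is enabled.
def validate_lookup(merge_engine: str, full_compaction: bool) -> None:
    if merge_engine == "partial-update" and not full_compaction:
        raise ValueError(
            "Lookup on a partial-update table requires full compaction")
```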





[jira] [Closed] (FLINK-30294) Change table property key 'log.scan' to 'startup.mode' and add a default startup mode in Table Store

2022-12-05 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30294.
---
Resolution: Fixed

master: dadd735d362d355d60b737bf2c9669cb2fc880f6

> Change table property key 'log.scan' to 'startup.mode' and add a default 
> startup mode in Table Store
> 
>
> Key: FLINK-30294
> URL: https://issues.apache.org/jira/browse/FLINK-30294
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
>
> We're introducing time-travel reading of Table Store for batch jobs. This 
> reading mode is quite similar to the "from-timestamp" startup mode for 
> streaming jobs, except that "from-timestamp" streaming jobs only consume 
> incremental data, not history data.
> We can support startup modes for both batch and streaming jobs. For batch 
> jobs, the "from-timestamp" startup mode will produce all records from the 
> last snapshot before the specified timestamp. For streaming jobs the behavior 
> doesn't change.
> Previously, in order to use the "from-timestamp" startup mode, users had to 
> specify both "log.scan" and "log.scan.timestamp-millis", which is a little 
> inconvenient. We can introduce a "default" startup mode whose behavior is 
> based on the execution environment and other configurations. In this way, to 
> use the "from-timestamp" startup mode, it is enough for users to specify just 
> "startup.timestamp-millis".





[jira] [Created] (FLINK-30294) Change table property key 'log.scan' to 'startup.mode' and add a default startup mode in Table Store

2022-12-04 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30294:
---

 Summary: Change table property key 'log.scan' to 'startup.mode' 
and add a default startup mode in Table Store
 Key: FLINK-30294
 URL: https://issues.apache.org/jira/browse/FLINK-30294
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng


We're introducing time-travel reading of Table Store for batch jobs. However, 
this reading mode is quite similar to the "from-timestamp" startup mode for 
streaming jobs, except that "from-timestamp" streaming jobs only consume 
incremental data, not historical data.

We can support startup modes for both batch and streaming jobs. For batch jobs, 
the "from-timestamp" startup mode will produce all records from the last 
snapshot before the specified timestamp. For streaming jobs the behavior 
doesn't change.

Previously, to use the "from-timestamp" startup mode, users had to specify both 
"log.scan" and "log.scan.timestamp-millis", which is a little inconvenient. We 
can introduce a "default" startup mode whose behavior is based on the execution 
environment and other configurations. This way, to use the "from-timestamp" 
startup mode, it is enough for users to specify just "startup.timestamp-millis".
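A minimal sketch of how such a "default" startup mode could be resolved. All names here (StartupMode, resolve, the parameters) are illustrative assumptions, not the actual Table Store API:

```java
// Hypothetical sketch of resolving a "default" startup mode from the
// execution environment and other configuration, as the ticket proposes.
public class StartupModeSketch {

    enum StartupMode { DEFAULT, FULL, FROM_TIMESTAMP }

    static StartupMode resolve(StartupMode configured, boolean isStreaming, Long timestampMillis) {
        if (configured != StartupMode.DEFAULT) {
            // An explicitly configured mode always wins.
            return configured;
        }
        // Specifying only "startup.timestamp-millis" is enough to opt into
        // from-timestamp reading.
        if (timestampMillis != null) {
            return StartupMode.FROM_TIMESTAMP;
        }
        // Otherwise fall back to a full scan (streaming jobs would continue
        // with incremental reads afterwards).
        return StartupMode.FULL;
    }
}
```

With this scheme a user who sets only the timestamp option gets from-timestamp reading in both batch and streaming jobs.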





[jira] [Closed] (FLINK-30206) Allow FileStoreScan to read incremental changes from OVERWRITE snapshot in Table Store

2022-11-29 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30206.
---
Resolution: Fixed

master: c7bc1fb9d1d656dd93fafe66dbef185b560a33c2

> Allow FileStoreScan to read incremental changes from OVERWRITE snapshot in 
> Table Store
> --
>
> Key: FLINK-30206
> URL: https://issues.apache.org/jira/browse/FLINK-30206
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Currently {{AbstractFileStoreScan}} can only read incremental changes from 
> APPEND snapshots. However, in OVERWRITE snapshots users also append new 
> records to the table. These changes must be discovered by the compact job 
> source so that the overwritten partition can be compacted.
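The change can be pictured as widening a snapshot-kind filter. This is an illustrative sketch, not the real AbstractFileStoreScan code; the enum and method names are assumptions:

```java
// Sketch of widening an incremental scan from APPEND-only snapshots to
// OVERWRITE snapshots as well, so a compact job source sees overwritten data.
public class IncrementalScanSketch {

    enum SnapshotKind { APPEND, COMPACT, OVERWRITE }

    // Before: only APPEND snapshots yielded incremental changes.
    static boolean readableBefore(SnapshotKind kind) {
        return kind == SnapshotKind.APPEND;
    }

    // After: OVERWRITE snapshots also carry newly appended records that the
    // compact job source must discover.
    static boolean readableAfter(SnapshotKind kind) {
        return kind == SnapshotKind.APPEND || kind == SnapshotKind.OVERWRITE;
    }
}
```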





[jira] [Closed] (FLINK-30223) Refactor Lock to provide Lock.Factory

2022-11-27 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30223.
---
Resolution: Fixed

master: 6886303b2f482f2f31c7b98221691a650c1e67d3

> Refactor Lock to provide Lock.Factory
> -
>
> Key: FLINK-30223
> URL: https://issues.apache.org/jira/browse/FLINK-30223
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> The core should not be exposed to too many Flink Table concepts, such as 
> database and table name. It only needs to create a Lock.
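The shape of the refactoring could look like the following sketch. The interfaces and the `forTable` helper are hypothetical illustrations of the idea, not the actual Table Store code:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Sketch of the Lock.Factory idea: a catalog-aware factory captures
// database/table once, so the core only ever sees the Lock abstraction.
public class LockFactorySketch {

    interface Lock {
        <T> T runWithLock(Supplier<T> action);
    }

    interface Factory {
        Lock create();
    }

    // Hypothetical helper: the table-aware layer builds the factory; the core
    // never handles database or table names itself.
    static Factory forTable(String database, String table) {
        ReentrantLock lock = new ReentrantLock();
        return () -> new Lock() {
            @Override
            public <T> T runWithLock(Supplier<T> action) {
                lock.lock();
                try {
                    return action.get();
                } finally {
                    lock.unlock();
                }
            }
        };
    }
}
```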





[jira] [Commented] (FLINK-30164) Expose BucketComputer from SupportsWrite

2022-11-27 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17639748#comment-17639748
 ] 

Caizhi Weng commented on FLINK-30164:
-

master: d8eb796f035f35e1ac85ff3f657452dd2a41e644

> Expose BucketComputer from SupportsWrite
> 
>
> Key: FLINK-30164
> URL: https://issues.apache.org/jira/browse/FLINK-30164
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> When other engines integrate with the sink, they need to know the 
> corresponding bucketing rules so that records can be correctly distributed to 
> each bucket.
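To see why the rule must be shared: every writer has to map a record to the same bucket. The hash-and-modulo scheme below is a common pattern for this, shown only as an assumption, not necessarily Table Store's exact implementation:

```java
import java.util.Objects;

// Sketch of a bucket computer: an external engine must apply the same rule
// as the native sink, or records land in the wrong bucket.
public class BucketComputerSketch {

    static int bucket(Object bucketKey, int numBuckets) {
        // floorMod keeps the result non-negative even for negative hashes.
        return Math.floorMod(Objects.hashCode(bucketKey), numBuckets);
    }
}
```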





[jira] [Updated] (FLINK-30164) Expose BucketComputer from SupportsWrite

2022-11-27 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30164:

Release Note:   (was: master: d8eb796f035f35e1ac85ff3f657452dd2a41e644)

> Expose BucketComputer from SupportsWrite
> 
>
> Key: FLINK-30164
> URL: https://issues.apache.org/jira/browse/FLINK-30164
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> When other engines integrate with the sink, they need to know the 
> corresponding bucketing rules so that records can be correctly distributed to 
> each bucket.





[jira] [Closed] (FLINK-30164) Expose BucketComputer from SupportsWrite

2022-11-27 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30164.
---
Release Note: master: d8eb796f035f35e1ac85ff3f657452dd2a41e644
  Resolution: Fixed

> Expose BucketComputer from SupportsWrite
> 
>
> Key: FLINK-30164
> URL: https://issues.apache.org/jira/browse/FLINK-30164
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> When other engines integrate with the sink, they need to know the 
> corresponding bucketing rules so that records can be correctly distributed to 
> each bucket.





[jira] [Created] (FLINK-30212) Introduce a TableStoreCompactJob class for users to submit compact jobs in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30212:
---

 Summary: Introduce a TableStoreCompactJob class for users to 
submit compact jobs in Table Store
 Key: FLINK-30212
 URL: https://issues.apache.org/jira/browse/FLINK-30212
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


Currently Flink does not support SQL statements for compacting a table, so in 
this ticket we create a TableStoreCompactJob class for users to submit compact 
jobs in Table Store.





[jira] [Created] (FLINK-30211) Introduce CompactStoreSink for compact jobs in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30211:
---

 Summary: Introduce CompactStoreSink for compact jobs in Table Store
 Key: FLINK-30211
 URL: https://issues.apache.org/jira/browse/FLINK-30211
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


This ticket introduces {{CompactStoreSink}} for compact jobs in Table Store.

The behavior of {{CompactStoreSink}} is sketched as follows: the sink accepts 
records containing the partitions and buckets to compact, and performs 
compaction on these buckets.





[jira] [Created] (FLINK-30210) Refactor StoreCompactOperator to accept records containing partitions and buckets in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30210:
---

 Summary: Refactor StoreCompactOperator to accept records 
containing partitions and buckets in Table Store
 Key: FLINK-30210
 URL: https://issues.apache.org/jira/browse/FLINK-30210
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


In this ticket we refactor {{StoreCompactOperator}} to accept records 
containing partitions and buckets in Table Store. The old {{ALTER TABLE 
COMPACT}} statement will also be disabled in this ticket due to various 
restrictions.





[jira] [Created] (FLINK-30209) Introduce CompactFileStoreSource for compact jobs of Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30209:
---

 Summary: Introduce CompactFileStoreSource for compact jobs of 
Table Store
 Key: FLINK-30209
 URL: https://issues.apache.org/jira/browse/FLINK-30209
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


In this ticket we create the {{CompactFileStoreSource}} Flink source for 
separated compact jobs.

The behavior of this source is sketched as follows:
* For batch compact jobs, the source produces records containing all partitions 
and buckets of the current table.
* For streaming compact jobs, the source produces records containing all 
modified partitions and buckets of each snapshot. The source also monitors 
newly created snapshots.
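The batch-mode enumeration above can be sketched as follows. The `Split` record and `batchSplits` helper are hypothetical illustrations, not the actual source API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the batch-mode behavior of a compact-job source: emit one
// (partition, bucket) record for every bucket of the current table.
public class CompactSourceSketch {

    record Split(String partition, int bucket) {}

    static List<Split> batchSplits(Map<String, Integer> bucketsPerPartition) {
        List<Split> splits = new ArrayList<>();
        bucketsPerPartition.forEach((partition, numBuckets) -> {
            for (int b = 0; b < numBuckets; b++) {
                splits.add(new Split(partition, b));
            }
        });
        return splits;
    }
}
```

A streaming variant would instead emit only the buckets touched by each new snapshot and keep watching for further snapshots.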





[jira] [Created] (FLINK-30207) Move split initialization and discovery logic fully into SnapshotEnumerator in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30207:
---

 Summary: Move split initialization and discovery logic fully into 
SnapshotEnumerator in Table Store
 Key: FLINK-30207
 URL: https://issues.apache.org/jira/browse/FLINK-30207
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


It is possible that the separated compact job is started long after the write 
jobs, so compact job sources need special split initialization logic: we find 
the latest COMPACT snapshot and start compacting right after this snapshot.

However, split initialization logic is currently coded into 
{{FileStoreSource}}. We should extract this logic into {{SnapshotEnumerator}} 
so that we can create a special {{SnapshotEnumerator}} for compact sources.
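The special initialization rule can be sketched like this; the types and names are illustrative assumptions, not the real SnapshotEnumerator code:

```java
import java.util.List;

// Sketch of the compact source's split initialization: walk snapshots from
// newest to oldest and resume right after the latest COMPACT snapshot.
public class CompactEnumeratorSketch {

    enum Kind { APPEND, COMPACT, OVERWRITE }

    record Snapshot(long id, Kind kind) {}

    // Returns the id of the latest COMPACT snapshot, or -1 when the table
    // has never been compacted (then the job starts from the beginning).
    static long latestCompactSnapshot(List<Snapshot> snapshots) {
        for (int i = snapshots.size() - 1; i >= 0; i--) {
            if (snapshots.get(i).kind() == Kind.COMPACT) {
                return snapshots.get(i).id();
            }
        }
        return -1;
    }
}
```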





[jira] [Created] (FLINK-30206) Allow FileStoreScan to read incremental changes from OVERWRITE snapshot in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30206:
---

 Summary: Allow FileStoreScan to read incremental changes from 
OVERWRITE snapshot in Table Store
 Key: FLINK-30206
 URL: https://issues.apache.org/jira/browse/FLINK-30206
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


Currently {{AbstractFileStoreScan}} can only read incremental changes from 
APPEND snapshots. However, in OVERWRITE snapshots users also append new 
records to the table. These changes must be discovered by the compact job 
source so that the overwritten partition can be compacted.





[jira] [Updated] (FLINK-30204) Table Store support separated compact jobs

2022-11-25 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-30204:

Description: 
Currently Table Store sinks write and compact data files in the same job. 
While this implementation is sufficient and more economical for most users, 
some users may expect higher or steadier write throughput.

We decided to support creating separated compact jobs for Table Store. This 
brings the following advantages:
* Write jobs can concentrate only on writing files. Their throughput will be 
higher and steadier.
* By creating only one compact job for each table, no commit conflicts will 
occur.

The structure of a separated compact job is sketched as follows:
* A compact job has three vertices: one source vertex, one sink (compactor) 
vertex and one commit vertex.
* The source vertex is responsible for generating records containing the 
partitions and buckets to be compacted.
* The sink vertex accepts records containing partitions and buckets, and 
compacts these buckets.
* The commit vertex commits the changes from the sink vertex. It is possible 
that the user mistakenly creates other compact jobs, so commit conflicts may 
still occur. However, as compact changes are optional, the commit vertex 
commits changes in an at-most-once style.

  was:
Currently Table Store sinks write and compact data files in the same job. 
While this implementation is sufficient and more economical for most users, 
some users may expect higher or steadier write throughput.

We decided to support creating separated compact jobs for Table Store. This 
brings the following advantages:
* Write jobs can concentrate only on writing files. Their throughput will be 
higher and steadier.
* By creating only one compact job for each table, no commit conflicts will 
occur.


> Table Store support separated compact jobs
> --
>
> Key: FLINK-30204
> URL: https://issues.apache.org/jira/browse/FLINK-30204
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: table-store-0.3.0
>
>
> Currently Table Store sinks write and compact data files in the same job. 
> While this implementation is sufficient and more economical for most users, 
> some users may expect higher or steadier write throughput.
> We decided to support creating separated compact jobs for Table Store. This 
> brings the following advantages:
> * Write jobs can concentrate only on writing files. Their throughput will be 
> higher and steadier.
> * By creating only one compact job for each table, no commit conflicts will 
> occur.
> The structure of a separated compact job is sketched as follows:
> * A compact job has three vertices: one source vertex, one sink (compactor) 
> vertex and one commit vertex.
> * The source vertex is responsible for generating records containing the 
> partitions and buckets to be compacted.
> * The sink vertex accepts records containing partitions and buckets, and 
> compacts these buckets.
> * The commit vertex commits the changes from the sink vertex. It is possible 
> that the user mistakenly creates other compact jobs, so commit conflicts may 
> still occur. However, as compact changes are optional, the commit vertex 
> commits changes in an at-most-once style.
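The at-most-once commit style described for the commit vertex can be sketched as follows. The `Committer` interface and helper are hypothetical illustrations of the idea, not the actual Table Store commit code:

```java
// Sketch of an at-most-once commit: a conflicting compact commit is dropped
// rather than retried, because compact changes are optional.
public class AtMostOnceCommitSketch {

    interface Committer {
        void commit(String changes) throws Exception; // may throw on conflict
    }

    // Returns true if the commit landed, false if it was dropped.
    static boolean commitAtMostOnce(Committer committer, String changes) {
        try {
            committer.commit(changes);
            return true;
        } catch (Exception conflict) {
            // Another (possibly user-created) compact job won the race; drop
            // our optional compact changes instead of retrying.
            return false;
        }
    }
}
```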





[jira] [Created] (FLINK-30205) Modify compact interface for TableWrite and FileStoreWrite to support normal compaction in Table Store

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30205:
---

 Summary: Modify compact interface for TableWrite and 
FileStoreWrite to support normal compaction in Table Store
 Key: FLINK-30205
 URL: https://issues.apache.org/jira/browse/FLINK-30205
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


Currently the {{compact}} interface in {{TableWrite}} and {{FileStoreWrite}} 
can only trigger full compaction. However, a separated compact job should not 
only perform full compaction but also perform normal compaction once in a 
while, just like the current Table Store sinks do.

We need to modify the compact interface of TableWrite and FileStoreWrite to 
support normal compaction.
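One possible shape for the change is a single compact entry point with a flag selecting full versus normal compaction. The signature below is an illustrative assumption, not the actual TableWrite/FileStoreWrite API:

```java
// Sketch of a compact entry point that supports both compaction kinds.
public class CompactInterfaceSketch {

    interface Write {
        // fullCompaction = true  -> rewrite every file in the bucket
        // fullCompaction = false -> normal compaction, only when thresholds are met
        void compact(String partition, int bucket, boolean fullCompaction);
    }

    // Toy implementation that records which kind of compaction ran.
    static class RecordingWrite implements Write {
        String lastAction = "none";

        @Override
        public void compact(String partition, int bucket, boolean fullCompaction) {
            lastAction = fullCompaction ? "full" : "normal";
        }
    }
}
```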





[jira] [Created] (FLINK-30204) Table Store support separated compact jobs

2022-11-25 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30204:
---

 Summary: Table Store support separated compact jobs
 Key: FLINK-30204
 URL: https://issues.apache.org/jira/browse/FLINK-30204
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0


Currently Table Store sinks write and compact data files in the same job. 
While this implementation is sufficient and more economical for most users, 
some users may expect higher or steadier write throughput.

We decided to support creating separated compact jobs for Table Store. This 
brings the following advantages:
* Write jobs can concentrate only on writing files. Their throughput will be 
higher and steadier.
* By creating only one compact job for each table, no commit conflicts will 
occur.





[jira] [Closed] (FLINK-30143) Table Store fails when temporary directory is a symlink

2022-11-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-30143.
---
Resolution: Duplicate

> Table Store fails when temporary directory is a symlink
> ---
>
> Key: FLINK-30143
> URL: https://issues.apache.org/jira/browse/FLINK-30143
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0, table-store-0.2.2
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: table-store-0.3.0, table-store-0.2.2
>
>
> When {{java.io.tmpdir}} points to a symbolic link, the following exception 
> will be thrown:
> {code}
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.flink.table.store.codegen.CodeGenLoader.getInstance(CodeGenLoader.java:118)
>   at 
> org.apache.flink.table.store.codegen.CodeGenUtils.newProjection(CodeGenUtils.java:47)
>   at 
> org.apache.flink.table.store.table.sink.SinkRecordConverter.(SinkRecordConverter.java:68)
>   at 
> org.apache.flink.table.store.table.sink.SinkRecordConverter.(SinkRecordConverter.java:50)
>   at 
> org.apache.flink.table.store.table.sink.SinkRecordConverterTest.converter(SinkRecordConverterTest.java:99)
>   at 
> org.apache.flink.table.store.table.sink.SinkRecordConverterTest.converter(SinkRecordConverterTest.java:75)
>   at 
> org.apache.flink.table.store.table.sink.SinkRecordConverterTest.testBucket(SinkRecordConverterTest.java:58)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>   at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> 

[jira] [Created] (FLINK-30143) Table Store fails when temporary directory is a symlink

2022-11-22 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-30143:
---

 Summary: Table Store fails when temporary directory is a symlink
 Key: FLINK-30143
 URL: https://issues.apache.org/jira/browse/FLINK-30143
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.3.0, table-store-0.2.2
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: table-store-0.3.0, table-store-0.2.2


When {{java.io.tmpdir}} points to a symbolic link, the following exception will 
be thrown:

{code}
java.lang.ExceptionInInitializerError
at 
org.apache.flink.table.store.codegen.CodeGenLoader.getInstance(CodeGenLoader.java:118)
at 
org.apache.flink.table.store.codegen.CodeGenUtils.newProjection(CodeGenUtils.java:47)
at 
org.apache.flink.table.store.table.sink.SinkRecordConverter.(SinkRecordConverter.java:68)
at 
org.apache.flink.table.store.table.sink.SinkRecordConverter.(SinkRecordConverter.java:50)
at 
org.apache.flink.table.store.table.sink.SinkRecordConverterTest.converter(SinkRecordConverterTest.java:99)
at 
org.apache.flink.table.store.table.sink.SinkRecordConverterTest.converter(SinkRecordConverterTest.java:75)
at 
org.apache.flink.table.store.table.sink.SinkRecordConverterTest.testBucket(SinkRecordConverterTest.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.util.ArrayList.forEach(ArrayList.java:1255)
at 

[jira] [Assigned] (FLINK-30012) A typo in official Table Store document.

2022-11-14 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-30012:
---

Assignee: Caizhi Weng

> A typo in official Table Store document.
> 
>
> Key: FLINK-30012
> URL: https://issues.apache.org/jira/browse/FLINK-30012
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: 1.16.0
> Environment: Flink 1.16.0
>Reporter: Hang HOU
>Assignee: Caizhi Weng
>Priority: Minor
>  Labels: pull-request-available
>
> The Rescale Bucket document contains a typo: "exiting".
> [Rescale 
> Bucket|https://nightlies.apache.org/flink/flink-table-store-docs-release-0.2/docs/development/rescale-bucket/#rescale-bucket]




