[jira] [Commented] (FLINK-30593) Determine restart time on the fly for Autoscaler

2023-08-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759476#comment-17759476
 ] 

Nicholas Jiang commented on FLINK-30593:


[~gyfora], does this rely on the stop time of the scale operations? JobStatus 
only records the start time of the scale operations and doesn't record the stop 
time at present, which means we cannot derive the restart times from the 
scaling history.
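A rough sketch of what deriving the restart time on the fly could look like, assuming each scaling record gained both a start and a stop timestamp (a hypothetical record shape; today's JobStatus only stores the start time, as noted above):

```python
from dataclasses import dataclass

@dataclass
class ScalingRecord:
    # Hypothetical record: the stop time would have to be added to the
    # scaling history for this to work.
    start_ms: int
    stop_ms: int

def observed_restart_time_ms(history: list[ScalingRecord], default_ms: int) -> int:
    """Average the observed restart durations; fall back to the
    preconfigured default when no completed scalings are recorded."""
    durations = [r.stop_ms - r.start_ms for r in history if r.stop_ms > r.start_ms]
    if not durations:
        return default_ms
    return sum(durations) // len(durations)
```

For example, two completed scalings that took 120 s and 60 s would yield an estimate of 90 s instead of the preconfigured value.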

> Determine restart time on the fly for Autoscaler
> 
>
> Key: FLINK-30593
> URL: https://issues.apache.org/jira/browse/FLINK-30593
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Currently the autoscaler uses a preconfigured restart time for the job. We 
> should dynamically adjust this based on the observed restart times for scale 
> operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-30593) Determine restart time on the fly for Autoscaler

2023-08-27 Thread Nicholas Jiang (Jira)


[ https://issues.apache.org/jira/browse/FLINK-30593 ]


Nicholas Jiang deleted comment on FLINK-30593:


was (Author: nicholasjiang):
[~gyfora], does this rely on the stop time of scale operations? JobStatus only 
records the observed start time of the operation and doesn't record the stop 
time, which means we cannot derive the observed restart times for scale 
operations.

> Determine restart time on the fly for Autoscaler
> 
>
> Key: FLINK-30593
> URL: https://issues.apache.org/jira/browse/FLINK-30593
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Currently the autoscaler uses a preconfigured restart time for the job. We 
> should dynamically adjust this based on the observed restart times for scale 
> operations.





[jira] [Commented] (FLINK-30593) Determine restart time on the fly for Autoscaler

2023-08-27 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759452#comment-17759452
 ] 

Nicholas Jiang commented on FLINK-30593:


[~gyfora], does this rely on the stop time of scale operations? JobStatus only 
records the observed start time of the operation and doesn't record the stop 
time, which means we cannot derive the observed restart times for scale 
operations.

> Determine restart time on the fly for Autoscaler
> 
>
> Key: FLINK-30593
> URL: https://issues.apache.org/jira/browse/FLINK-30593
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Currently the autoscaler uses a preconfigured restart time for the job. We 
> should dynamically adjust this based on the observed restart times for scale 
> operations.





[jira] [Comment Edited] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697846#comment-17697846
 ] 

Nicholas Jiang edited comment on FLINK-31364 at 3/8/23 11:23 AM:
-

[~Ming Li], this ticket is duplicated by FLINK-31224.


was (Author: nicholasjiang):
[~Ming Li], this ticket is duplicated by 
[FLINK-31338|https://issues.apache.org/jira/browse/FLINK-31338].

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Commented] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17697846#comment-17697846
 ] 

Nicholas Jiang commented on FLINK-31364:


[~Ming Li], this ticket is duplicated by 
[FLINK-31338|https://issues.apache.org/jira/browse/FLINK-31338].

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Updated] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31364:
---
Attachment: image-2023-03-08-19-21-24-801.png

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Updated] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31364:
---
Attachment: (was: image-2023-03-08-19-21-42-291.png)

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Updated] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31364:
---
Attachment: image-2023-03-08-19-21-42-291.png

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Updated] (FLINK-31364) [Flink] add metrics for TableStore

2023-03-08 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31364:
---
Attachment: (was: image-2023-03-08-19-21-24-801.png)

> [Flink] add metrics for TableStore
> --
>
> Key: FLINK-31364
> URL: https://issues.apache.org/jira/browse/FLINK-31364
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Ming Li
>Priority: Major
>
> Currently, relevant metrics are missing in {{Table Store}}, such as split 
> consumption speed, commit information statistics, etc. We can add metrics for 
> real-time monitoring of {{Table Store}}.





[jira] [Comment Edited] (FLINK-31310) Force clear directory no matter what situation in HiveCatalog.dropTable

2023-03-04 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696460#comment-17696460
 ] 

Nicholas Jiang edited comment on FLINK-31310 at 3/4/23 4:02 PM:


[~lzljs3620320], the `dropTable` interface is invoked after `getTable`, 
so if the table does not exist in Hive, `dropTable` cannot be invoked, because 
there is a check for whether the table exists in `getDataTableSchema`. The 
non-existence situation occurs in `getTable`, not in `dropTable`.

IMO, when the table is not in Hive, users could use the FileSystemCatalog to 
drop the table and clear the table directory, while HiveCatalog only drops the 
table in Hive and clears the table directory via the Hive metastore client.


was (Author: nicholasjiang):
[~lzljs3620320], the `dropTable` interface is invoked after `getTable`, 
so if the table does not exist in Hive, `dropTable` cannot be invoked, because 
there is a check for whether the table exists in `getDataTableSchema`. IMO, 
when the table is not in Hive, users could use the FileSystemCatalog to drop 
the table and clear the table directory, while HiveCatalog only drops the table 
in Hive and clears the table directory via the Hive metastore client.

> Force clear directory no matter what situation in HiveCatalog.dropTable
> ---
>
> Key: FLINK-31310
> URL: https://issues.apache.org/jira/browse/FLINK-31310
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Currently, if the table does not exist in Hive, the table directory will not be cleared.
> We should clear the table directory in any situation.





[jira] [Commented] (FLINK-31310) Force clear directory no matter what situation in HiveCatalog.dropTable

2023-03-04 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696460#comment-17696460
 ] 

Nicholas Jiang commented on FLINK-31310:


[~lzljs3620320], the `dropTable` interface is invoked after `getTable`, 
so if the table does not exist in Hive, `dropTable` cannot be invoked, because 
there is a check for whether the table exists in `getDataTableSchema`. IMO, 
when the table is not in Hive, users could use the FileSystemCatalog to drop 
the table and clear the table directory, while HiveCatalog only drops the table 
in Hive and clears the table directory via the Hive metastore client.
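The force-clear behavior the ticket asks for can be sketched as follows; the catalog and path handling are simplified stand-ins for the real HiveCatalog, not its actual API:

```python
import shutil
from pathlib import Path

def drop_table(metastore: dict, table: str, warehouse: Path) -> None:
    """Drop the table from the (simulated) metastore if present, and
    clear the table directory in any situation, even when the table
    is already missing from the metastore."""
    metastore.pop(table, None)  # ignore "table not found" in the metastore
    table_dir = warehouse / table
    if table_dir.exists():
        shutil.rmtree(table_dir)  # force-clear regardless of metastore state
```

The key point is that the directory cleanup does not depend on the metastore lookup succeeding.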

> Force clear directory no matter what situation in HiveCatalog.dropTable
> ---
>
> Key: FLINK-31310
> URL: https://issues.apache.org/jira/browse/FLINK-31310
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Currently, if the table does not exist in Hive, the table directory will not be cleared.
> We should clear the table directory in any situation.





[jira] [Commented] (FLINK-31310) Force clear directory no matter what situation in HiveCatalog.dropTable

2023-03-02 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696048#comment-17696048
 ] 

Nicholas Jiang commented on FLINK-31310:


[~lzljs3620320], could you assign this to me?

> Force clear directory no matter what situation in HiveCatalog.dropTable
> ---
>
> Key: FLINK-31310
> URL: https://issues.apache.org/jira/browse/FLINK-31310
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Currently, if the table does not exist in Hive, the table directory will not be cleared.
> We should clear the table directory in any situation.





[jira] [Commented] (FLINK-25986) Add FLIP-190 new API methods to python

2023-03-02 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696029#comment-17696029
 ] 

Nicholas Jiang commented on FLINK-25986:


[~dianfu], please help to assign this to [~qingyue].

> Add FLIP-190 new API methods to python
> --
>
> Key: FLINK-25986
> URL: https://issues.apache.org/jira/browse/FLINK-25986
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Table SQL / API
>Reporter: Francesco Guardiani
>Assignee: Nicholas Jiang
>Priority: Major
>






[jira] [Updated] (FLINK-31250) Parquet format supports MULTISET type

2023-02-27 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31250:
---
Description: The Parquet format supports the ARRAY, MAP and ROW types, but 
doesn't support the MULTISET type. The Parquet format should support the 
MULTISET type.  (was: ParquetSchemaConverter supports the ARRAY, MAP and ROW 
types, but doesn't support the MULTISET type. ParquetSchemaConverter should 
support the MULTISET type for the Parquet format.)

> Parquet format supports MULTISET type
> -
>
> Key: FLINK-31250
> URL: https://issues.apache.org/jira/browse/FLINK-31250
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Nicholas Jiang
>Priority: Major
> Fix For: 1.18.0
>
>
> The Parquet format supports the ARRAY, MAP and ROW types, but doesn't support 
> the MULTISET type. The Parquet format should support the MULTISET type.





[jira] [Updated] (FLINK-31250) Parquet format supports MULTISET type

2023-02-27 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31250:
---
Summary: Parquet format supports MULTISET type  (was: 
ParquetSchemaConverter supports MULTISET type for parquet format)

> Parquet format supports MULTISET type
> -
>
> Key: FLINK-31250
> URL: https://issues.apache.org/jira/browse/FLINK-31250
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Nicholas Jiang
>Priority: Major
> Fix For: 1.18.0
>
>
> ParquetSchemaConverter supports the ARRAY, MAP and ROW types, but doesn't 
> support the MULTISET type. ParquetSchemaConverter should support the MULTISET 
> type for the Parquet format.





[jira] [Created] (FLINK-31250) ParquetSchemaConverter supports MULTISET type for parquet format

2023-02-27 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31250:
--

 Summary: ParquetSchemaConverter supports MULTISET type for parquet 
format
 Key: FLINK-31250
 URL: https://issues.apache.org/jira/browse/FLINK-31250
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.18.0
Reporter: Nicholas Jiang
 Fix For: 1.18.0


ParquetSchemaConverter supports the ARRAY, MAP and ROW types, but doesn't 
support the MULTISET type. ParquetSchemaConverter should support the MULTISET 
type for the Parquet format.
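One way to sketch the conversion: Flink models a MULTISET<T> as a map from element to its multiplicity, so a schema converter can lower it to the existing MAP path with an INT value type. The helper below only illustrates that value-level shape (the real converter works on Parquet group types, not Python values):

```python
from collections import Counter

def multiset_to_map(values: list) -> dict:
    """Represent a MULTISET as MAP<element, count>, the shape a schema
    converter can lower it to before reusing the MAP code path."""
    return dict(Counter(values))
```

For example, the multiset {a, a, b} becomes the map {a: 2, b: 1}.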





[jira] [Created] (FLINK-31232) Parquet format supports MULTISET type for Table Store

2023-02-26 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31232:
--

 Summary: Parquet format supports MULTISET type for Table Store
 Key: FLINK-31232
 URL: https://issues.apache.org/jira/browse/FLINK-31232
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Nicholas Jiang


Parquet format should support MULTISET type for Table Store.





[jira] [Commented] (FLINK-31224) Add metrics for flink table store

2023-02-25 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693489#comment-17693489
 ] 

Nicholas Jiang commented on FLINK-31224:


[~zhangjun], could you first add the design of the metric group and the 
corresponding writer metrics to the description? After agreement on the design, 
you could open a pull request.

> Add metrics for flink table store
> -
>
> Key: FLINK-31224
> URL: https://issues.apache.org/jira/browse/FLINK-31224
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.1
>Reporter: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.4.0
>
>
> Add metrics for flink table store





[jira] [Commented] (FLINK-22793) HybridSource Table Implementation

2023-02-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693322#comment-17693322
 ] 

Nicholas Jiang commented on FLINK-22793:


[~thw], of course, this can be assigned to [~lemonjing].

> HybridSource Table Implementation
> -
>
> Key: FLINK-22793
> URL: https://issues.apache.org/jira/browse/FLINK-22793
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Nicholas Jiang
>Assignee: Nicholas Jiang
>Priority: Major
>






[jira] [Commented] (FLINK-31040) Looping pattern notFollowedBy at end missing an element

2023-02-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693321#comment-17693321
 ] 

Nicholas Jiang commented on FLINK-31040:


[~Juntao Hu], [~martijnvisser], thanks for the report. A looping pattern with 
notFollowedBy at the end does indeed miss an element. Would you like to fix 
this?

> Looping pattern notFollowedBy at end missing an element
> ---
>
> Key: FLINK-31040
> URL: https://issues.apache.org/jira/browse/FLINK-31040
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Juntao Hu
>Priority: Major
>
> Pattern: begin("A", 
> SKIP_TO_NEXT).oneOrMore().consecutive().notFollowedBy("B").within(Time.milliseconds(3))
> Sequence:  will produce results 
> [a1, a2], [a2, a3], [a3], which obviously should be [a1, a2, a3], [a2, a3, 
> a4], [a3, a4], [a4].
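The expected matches can be reproduced with a small simulation of `oneOrMore().consecutive()` under a time window. The input sequence is elided above, so the 1 ms spacing of the events a1..a4 used here is an assumption:

```python
def consecutive_matches(events, timestamps, window_ms):
    """For each start event, collect the maximal consecutive run whose
    events all fall inside the window opened at the start event."""
    matches = []
    for i, start_ts in enumerate(timestamps):
        run = []
        for j in range(i, len(events)):
            if timestamps[j] >= start_ts + window_ms:
                break  # event falls outside the window
            run.append(events[j])
        matches.append(run)
    return matches
```

With a1..a4 spaced 1 ms apart and a 3 ms window this yields [a1, a2, a3], [a2, a3, a4], [a3, a4], [a4], matching the expected result quoted above.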





[jira] [Updated] (FLINK-31194) Introduces savepoint mechanism of Table Store

2023-02-22 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31194:
---
Summary: Introduces savepoint mechanism of Table Store  (was: ntroduces 
savepoint mechanism of Table Store)

> Introduces savepoint mechanism of Table Store
> -
>
> Key: FLINK-31194
> URL: https://issues.apache.org/jira/browse/FLINK-31194
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Disaster Recovery is very much mission critical for any software. Especially 
> when it comes to data systems, the impact could be very serious, leading to 
> delayed or even wrong business decisions at times. Flink Table Store could 
> introduce a savepoint mechanism to assist users in recovering data from a 
> previous state.
> As the name suggests, a "savepoint" saves the table as of the snapshot, so 
> that you can restore the table to this savepoint at a later point in time if 
> need be. Care is taken to ensure the cleaner will not clean up any files that 
> are savepointed. Along similar lines, a savepoint cannot be triggered on a 
> snapshot that has already been cleaned up. In simpler terms, this is 
> synonymous with taking a backup, except that we don't make a new copy of the 
> table; we just save the state of the table elegantly so that we can restore 
> it later when needed.





[jira] [Created] (FLINK-31194) ntroduces savepoint mechanism of Table Store

2023-02-22 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31194:
--

 Summary: ntroduces savepoint mechanism of Table Store
 Key: FLINK-31194
 URL: https://issues.apache.org/jira/browse/FLINK-31194
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.4.0


Disaster Recovery is very much mission critical for any software. Especially 
when it comes to data systems, the impact could be very serious, leading to 
delayed or even wrong business decisions at times. Flink Table Store could 
introduce a savepoint mechanism to assist users in recovering data from a 
previous state.

As the name suggests, a "savepoint" saves the table as of the snapshot, so that 
you can restore the table to this savepoint at a later point in time if need 
be. Care is taken to ensure the cleaner will not clean up any files that are 
savepointed. Along similar lines, a savepoint cannot be triggered on a snapshot 
that has already been cleaned up. In simpler terms, this is synonymous with 
taking a backup, except that we don't make a new copy of the table; we just 
save the state of the table elegantly so that we can restore it later when 
needed.
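The two invariants described above (the cleaner skips savepointed snapshots; an expired snapshot cannot be savepointed) can be sketched with a toy snapshot manager. The class and method names are illustrative, not the Table Store API:

```python
class SnapshotManager:
    """Minimal sketch: savepoints pin snapshots so the cleaner skips
    them, and a cleaned-up snapshot can no longer be savepointed."""

    def __init__(self):
        self.snapshots = set()
        self.savepoints = set()

    def commit(self, snapshot_id: int):
        self.snapshots.add(snapshot_id)

    def savepoint(self, snapshot_id: int):
        if snapshot_id not in self.snapshots:
            raise ValueError("snapshot already cleaned up or unknown")
        self.savepoints.add(snapshot_id)

    def clean_expired(self, keep_after: int):
        # never clean a savepointed snapshot, even if it has expired
        self.snapshots = {s for s in self.snapshots
                          if s >= keep_after or s in self.savepoints}
```

Savepointing snapshot 1 and then expiring everything before snapshot 3 leaves snapshots {1, 3}; trying to savepoint the expired snapshot 2 fails.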





[jira] [Commented] (FLINK-31190) Supports Spark call procedure command on Table Store

2023-02-22 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692494#comment-17692494
 ] 

Nicholas Jiang commented on FLINK-31190:


[~lzljs3620320], could you assign this ticket to me? I would like to support 
the Spark call procedure command, referring to the implementations in Hudi and 
Iceberg.

> Supports Spark call procedure command on Table Store
> 
>
> Key: FLINK-31190
> URL: https://issues.apache.org/jira/browse/FLINK-31190
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> At present, Hudi and Iceberg support the Spark call procedure command to 
> execute table service actions, etc. Flink Table Store could also support the 
> Spark call procedure command to run compaction, etc.





[jira] [Created] (FLINK-31190) Supports Spark call procedure command on Table Store

2023-02-22 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31190:
--

 Summary: Supports Spark call procedure command on Table Store
 Key: FLINK-31190
 URL: https://issues.apache.org/jira/browse/FLINK-31190
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.4.0


At present, Hudi and Iceberg support the Spark call procedure command to 
execute table service actions, etc. Flink Table Store could also support the 
Spark call procedure command to run compaction, etc.
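A call procedure feature usually boils down to a name-to-handler registry that a `CALL catalog.procedure(...)` statement dispatches into. The sketch below is hypothetical and modeled loosely on the Iceberg/Hudi style; the procedure name and its behavior are illustrative only:

```python
# Hypothetical procedure registry; names are illustrative.
PROCEDURES = {}

def procedure(name):
    """Decorator registering a callable under a procedure name."""
    def register(fn):
        PROCEDURES[name] = fn
        return fn
    return register

@procedure("compact")
def compact(table: str) -> str:
    # A real implementation would trigger a table-service compaction.
    return f"compacted {table}"

def call(name: str, **kwargs) -> str:
    """Dispatch a CALL statement to the registered procedure."""
    if name not in PROCEDURES:
        raise ValueError(f"unknown procedure: {name}")
    return PROCEDURES[name](**kwargs)
```

A SQL statement like `CALL sys.compact(...)` would then map onto `call("compact", table=...)` in this sketch.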





[jira] [Commented] (FLINK-30723) Introduce filter pushdown for parquet format

2023-02-19 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691013#comment-17691013
 ] 

Nicholas Jiang commented on FLINK-30723:


[~zjureel], [~lzljs3620320], this is duplicated by FLINK-31076.

> Introduce filter pushdown for parquet format
> 
>
> Key: FLINK-30723
> URL: https://issues.apache.org/jira/browse/FLINK-30723
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Priority: Major
>
> Introduce filter pushdown for parquet format





[jira] [Commented] (FLINK-31009) Add recordCount to snapshot meta

2023-02-15 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689530#comment-17689530
 ] 

Nicholas Jiang commented on FLINK-31009:


[~lzljs3620320], I'm working on this feature. Please assign it to me.

> Add recordCount to snapshot meta
> 
>
> Key: FLINK-31009
> URL: https://issues.apache.org/jira/browse/FLINK-31009
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
>
> Record count represents the total number of data records. It is simply added 
> by the number of data records of all files.
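The description above reduces to a one-line aggregation over the per-file metadata (the field name here is a stand-in, not the actual manifest schema):

```python
def snapshot_record_count(file_record_counts: list[int]) -> int:
    """Total record count of a snapshot: the sum of the record
    counts of all data files it references."""
    return sum(file_record_counts)
```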





[jira] [Created] (FLINK-31076) Supports filter predicate in Parquet format of table store

2023-02-14 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31076:
--

 Summary: Supports filter predicate in Parquet format of table store
 Key: FLINK-31076
 URL: https://issues.apache.org/jira/browse/FLINK-31076
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Reporter: Nicholas Jiang
 Fix For: table-store-0.4.0


Parquet is the main file format of Table Store, but it doesn't support filter 
predicates. Filter predicates should also be supported in the Parquet format, 
not only in the ORC format.
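The core of filter-predicate support is min/max-based pruning: a Parquet row group can be skipped when its column statistics prove the predicate cannot match. The tuple-based predicate shape below is illustrative, not the actual Table Store predicate classes:

```python
def row_group_may_match(predicate, stats) -> bool:
    """Keep a row group unless its statistics prove the predicate
    cannot match. `predicate` is (col, op, value) with op in
    {"=", "<", ">"}; `stats` maps col -> (min, max)."""
    col, op, value = predicate
    lo, hi = stats[col]
    if op == "=":
        return lo <= value <= hi
    if op == "<":
        return lo < value   # some row can be below `value`
    if op == ">":
        return hi > value   # some row can be above `value`
    return True  # unknown operator: cannot prune safely
```

Note the asymmetry: statistics can only prove non-matches, so any uncertain case must keep the row group.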





[jira] [Updated] (FLINK-31050) Supports IN and NOT IN predicate in ORC format of table store

2023-02-14 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31050:
---
Description: 
Supports IN and NOT IN predicate push down in ORC format of table store.

  was:
Supports IN predicate push down in ORC format of table store.


> Supports IN and NOT IN predicate in ORC format of table store
> -
>
> Key: FLINK-31050
> URL: https://issues.apache.org/jira/browse/FLINK-31050
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Nicholas Jiang
>Priority: Major
>
> Supports IN and NOT IN predicate push down in ORC format of table store.
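A minimal sketch of how IN / NOT IN predicates can be evaluated against per-stripe min/max statistics (a hypothetical helper, not the actual ORC reader API):

```python
def may_match_in(values, lo, hi, negated=False) -> bool:
    """IN: the stripe may match iff some value falls inside [min, max].
    NOT IN: min/max stats alone rarely prove all rows are excluded, so
    prune only when the stripe holds a single constant value (lo == hi)
    that is in the excluded set."""
    if not negated:
        return any(lo <= v <= hi for v in values)
    return not (lo == hi and lo in values)
```

This mirrors the usual pushdown caveat that NOT IN is far less prunable than IN.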





[jira] [Updated] (FLINK-31050) Supports IN and NOT IN predicate in ORC format of table store

2023-02-14 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-31050:
---
Summary: Supports IN and NOT IN predicate in ORC format of table store  
(was: Supports IN predicate in ORC format of table store)

> Supports IN and NOT IN predicate in ORC format of table store
> -
>
> Key: FLINK-31050
> URL: https://issues.apache.org/jira/browse/FLINK-31050
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Nicholas Jiang
>Priority: Major
>
> Supports IN predicate push down in ORC format of table store.





[jira] [Created] (FLINK-31050) Supports IN predicate in ORC format of table store

2023-02-13 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-31050:
--

 Summary: Supports IN predicate in ORC format of table store
 Key: FLINK-31050
 URL: https://issues.apache.org/jira/browse/FLINK-31050
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.4.0
Reporter: Nicholas Jiang


Supports IN predicate push down in ORC format of table store.





[jira] [Commented] (FLINK-30414) Add unit test time out when run ci.

2023-01-06 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655416#comment-17655416
 ] 

Nicholas Jiang commented on FLINK-30414:


[~Aiden Gong], there is no need to add a unit test timeout for CI runs. 
[~lzljs3620320], could you please help to close this ticket?

> Add unit test time out when run ci.
> ---
>
> Key: FLINK-30414
> URL: https://issues.apache.org/jira/browse/FLINK-30414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-12-14-17-59-56-800.png
>
>
> !image-2022-12-14-17-59-56-800.png!





[jira] [Comment Edited] (FLINK-30590) Remove set default value manually for table options

2023-01-06 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655414#comment-17655414
 ] 

Nicholas Jiang edited comment on FLINK-30590 at 1/6/23 11:54 AM:
-

[~zjureel], IMO, it's needed so that users can reduce user-defined 
configuration; for example, setting `scan.timestamp-millis` means that the scan 
mode is FROM_TIMESTAMP rather than the scan mode's default value `DEFAULT`. The 
manual `CoreOptions.setDefaultValues` helps users set the default value of a 
given configuration. Which default value may cause wrong error information?


was (Author: nicholasjiang):
[~zjureel], IMO, it's needed so that users can reduce user-defined 
configuration; for example, setting `scan.timestamp-millis` means that the scan 
mode is FROM_TIMESTAMP. The manual `CoreOptions.setDefaultValues` helps users 
set the default value of a given configuration. Which default value may cause 
wrong error information?

> Remove set default value manually for table options
> ---
>
> Key: FLINK-30590
> URL: https://issues.apache.org/jira/browse/FLINK-30590
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Priority: Major
>
> Remove manually setting default values in `CoreOptions.setDefaultValues`, 
> which may cause wrong error information and is not needed anymore.





[jira] [Commented] (FLINK-30590) Remove set default value manually for table options

2023-01-06 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655414#comment-17655414
 ] 

Nicholas Jiang commented on FLINK-30590:


[~zjureel], IMO, it's needed so that users can reduce user-defined 
configuration; for example, setting `scan.timestamp-millis` means that the scan 
mode is FROM_TIMESTAMP. The manual `CoreOptions.setDefaultValues` helps users 
set the default value of a given configuration. Which default value may cause 
wrong error information?
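The inference described in the comment could look roughly like this; the option keys mirror the ones mentioned above, while the function and mode names are hypothetical, not the actual Table Store code:

```python
def effective_scan_mode(options: dict) -> str:
    """Derive the scan mode the way CoreOptions.setDefaultValues-style
    logic is described above: an explicit scan.mode wins, otherwise
    setting scan.timestamp-millis implies FROM_TIMESTAMP."""
    if "scan.mode" in options:
        return options["scan.mode"]
    if "scan.timestamp-millis" in options:
        return "from-timestamp"  # inferred, not the DEFAULT mode
    return "default"
```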

> Remove set default value manually for table options
> ---
>
> Key: FLINK-30590
> URL: https://issues.apache.org/jira/browse/FLINK-30590
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: Shammon
>Priority: Major
>
> Remove setting default values manually in `CoreOptions.setDefaultValues`, 
> which may cause wrong error information and is not needed anymore



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29756) Support materialized column to improve query performance for complex types

2022-10-25 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-29756:
--

 Summary: Support materialized column to improve query performance 
for complex types
 Key: FLINK-29756
 URL: https://issues.apache.org/jira/browse/FLINK-29756
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.3.0


In the world of data warehouses, it is very common to use one or more columns 
from a complex type such as a map, or to put many subfields into it. These 
operations can greatly affect query performance because:
 # These operations are very wasteful in terms of IO. For example, if we have a 
field of type Map which contains dozens of subfields, we need to read the 
entire column even when accessing a single subfield. And Spark will traverse 
the entire map to get the value of the target key.
 # Cannot take advantage of vectorized reads when reading nested type columns.
 # Filter pushdown cannot be used when reading nested columns.

It is necessary to introduce the materialized column feature in Flink Table 
Store, which transparently solves the above problems for arbitrary columnar 
storage formats (not just Parquet).
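
The tradeoff described above can be sketched with a toy column-major layout. 
This is a minimal stdlib-only illustration of the idea; names such as `attrs` 
and `attrs_city` are hypothetical and do not reflect Table Store's actual 
storage or API.

```python
# Hedged sketch: in columnar storage, reading one key of a map column still
# loads and traverses every whole map value, while a materialized column
# promotes that key to its own flat column. Names are illustrative only.

# Column-major "file": one list per column.
table = {
    "id": [1, 2, 3],
    # complex map column with many subfields
    "attrs": [{"city": "a", "country": "x"},
              {"city": "b", "country": "y"},
              {"city": "c", "country": "z"}],
    # materialized column: attrs['city'] stored as a plain top-level column
    "attrs_city": ["a", "b", "c"],
}

def read_city_via_map(t):
    # Must deserialize and traverse every whole map value (wasteful IO).
    return [row["city"] for row in t["attrs"]]

def read_city_materialized(t):
    # Reads one flat column; vectorized reads and filter pushdown can apply.
    return t["attrs_city"]

print(read_city_via_map(table) == read_city_materialized(table))
```

Both paths return the same values; the materialized column simply avoids 
touching the rest of the map.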



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29226) Throw exception for streaming insert overwrite

2022-09-08 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17602059#comment-17602059
 ] 

Nicholas Jiang commented on FLINK-29226:


[~lzljs3620320], could you please assign this ticket to me?

> Throw exception for streaming insert overwrite
> --
>
> Key: FLINK-29226
> URL: https://issues.apache.org/jira/browse/FLINK-29226
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.3.0, table-store-0.2.1
>
>
> Currently, table store does not support streaming insert overwrite; we should 
> throw an exception for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-26938) HybridSource recovery from savepoint fails When flink parallelism is greater than the number of Kafka partitions

2022-08-14 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang resolved FLINK-26938.

Fix Version/s: 1.16.0
   Resolution: Fixed

> HybridSource recovery from savepoint fails When flink parallelism is greater 
> than the number of Kafka partitions
> 
>
> Key: FLINK-26938
> URL: https://issues.apache.org/jira/browse/FLINK-26938
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.0
> Environment: Flink 1.14.0
>Reporter: wenbao
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.16.0
>
> Attachments: HybridSourceTest.java, image-2022-03-31-13-36-45-686.png
>
>
> HybridSource recovery from savepoint fails When flink parallelism is greater 
> than the number of Kafka partitions
> First test
> Flink job before savePoint
>     flink parallelism =16
>     kafka partition=3
> Flink after savePoint
> case 1:
>     flink parallelism =16
>     kafka partition=3
> HybridSource recovery from savepoint fails 
> !image-2022-03-31-13-36-45-686.png!
> case 2:
>     flink parallelism =3
>     kafka partition=3
> HybridSource recovery from savepoint  successful
> case 3:
>     flink parallelism =8
>     kafka partition=3
> HybridSource recovery from savepoint fails with the same NullPointerException: 
> Source for index=0 not available
> case 4:
>     flink parallelism =4
>     kafka partition=3
> HybridSource recovery from savepoint fails with the same NullPointerException: 
> Source for index=0 not available
> case 5:
>     flink parallelism =1
>     kafka partition=3
> HybridSource recovery from savepoint  successful
> Second test
> Flink job before savePoint
>     flink parallelism =3
>     kafka partition=3
> Flink after savePoint
> case 1:
>     flink parallelism =3
>     kafka partition=3
> HybridSource recovery from savepoint  successful
> case 2:
>     flink parallelism =1
>     kafka partition=3
> HybridSource recovery from savepoint  successful
> case 3:
>     flink parallelism =4
>     kafka partition=3
> HybridSource recovery from savepoint fails with the same NullPointerException: 
> Source for index=0 not available
> See the attached test code HybridSourceTest for the specific code
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28817) NullPointerException in HybridSource when restoring from checkpoint

2022-08-12 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578893#comment-17578893
 ] 

Nicholas Jiang commented on FLINK-28817:


[~thw], IMO, this pull request could also fix the bug of FLINK-26938, right?

cc [~zhongqishang] 

> NullPointerException in HybridSource when restoring from checkpoint
> ---
>
> Key: FLINK-28817
> URL: https://issues.apache.org/jira/browse/FLINK-28817
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.14.4, 1.15.1
>Reporter: Michael
>Priority: Major
>  Labels: pull-request-available
> Attachments: Preconditions-checkNotNull-error.zip, 
> bf-29-JM-err-analysis.log
>
>
> Scenario:
>  # CheckpointCoordinator - Completed checkpoint 14 for job 
> 
>  # HybridSource successfully completed processing a few SourceFactories, that 
> reads from s3
>  # HybridSourceSplitEnumerator.switchEnumerator failed with 
> com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed 
> out. This is an intermittent error; it is usually fixed when Flink recovers 
> from the checkpoint and repeats the operation.
>  # Flink starts recovering from checkpoint, 
>  # HybridSourceSplitEnumerator receives 
> SourceReaderFinishedEvent\{sourceIndex=-1}
>  # Processing this event cause 
> 2022/08/08 08:39:34.862 ERROR o.a.f.r.s.c.SourceCoordinator - Uncaught 
> exception in the SplitEnumerator for Source Source: hybrid-source while 
> handling operator event SourceEventWrapper[SourceReaderFinishedEvent
> {sourceIndex=-1}
> ] from subtask 6. Triggering job failover.
> java.lang.NullPointerException: Source for index=0 is not available from 
> sources: \{788=org.apache.flink.connector.file.src.SppFileSource@5a3803f3}
> at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:104)
> at 
> org.apache.flink.connector.base.source.hybridspp.SwitchedSources.sourceOf(SwitchedSources.java:36)
> at 
> org.apache.flink.connector.base.source.hybridspp.HybridSourceSplitEnumerator.sendSwitchSourceEvent(HybridSourceSplitEnumerator.java:152)
> at 
> org.apache.flink.connector.base.source.hybridspp.HybridSourceSplitEnumerator.handleSourceEvent(HybridSourceSplitEnumerator.java:226)
> ...
> I'm running my version of the Hybrid Sources with additional logging, so line 
> numbers & some names could be different from the Flink GitHub repository.
> My observation: the problem is intermittent; sometimes it works ok, i.e. 
> SourceReaderFinishedEvent comes with the correct sourceIndex. As I see from my 
> log, it happens if my SourceFactory.create() is executed BEFORE 
> HybridSourceSplitEnumerator - handleSourceEvent 
> SourceReaderFinishedEvent\{sourceIndex=-1}.
> If  HybridSourceSplitEnumerator - handleSourceEvent is executed before my 
> SourceFactory.create(), then sourceIndex=-1 in SourceReaderFinishedEvent
> Preconditions-checkNotNull-error log from JobMgr is attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28930) Add "What is Flink Table Store?" link to flink website

2022-08-11 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28930:
--

 Summary: Add "What is Flink Table Store?" link to flink website
 Key: FLINK-28930
 URL: https://issues.apache.org/jira/browse/FLINK-28930
 Project: Flink
  Issue Type: Improvement
  Components: Project Website, Table Store
Reporter: Nicholas Jiang


Similar to the statefun and ml projects, we should also add a "What is Flink 
Table Store?" link to the menu pointing to the doc site. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28903) flink-table-store-hive-catalog could not shade hive-shims-0.23

2022-08-11 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28903:
---
Description: 
flink-table-store-hive-catalog could not shade hive-shims-0.23 because 
artifactSet doesn't include hive-shims-0.23 and hive-shims-0.23 isn't 
specifically included with filters. The exception is as follows:
{code:java}
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1708)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:380)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.<init>(HiveCatalog.java:80) 
~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:93)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:62)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:428)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1356)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
~[?:1.8.0_181]
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_181]
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_181]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_181]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1706)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 

[jira] [Updated] (FLINK-28903) flink-table-store-hive-catalog could not shade hive-shims-0.23

2022-08-11 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28903:
---
Description: 
flink-table-store-hive-catalog could not shade hive-shims-0.23 because 
artifactSet doesn't include hive-shims-0.23 and the hive-shims-0.23 isn't 
specifically included with filters. The exception is as follows when setting 
hive.metastore.use.SSL or hive.metastore.sasl.enabled to true:
{code:java}
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1708)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:380)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.<init>(HiveCatalog.java:80) 
~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:93)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:62)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:428)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1356)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
~[?:1.8.0_181]
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_181]
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_181]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_181]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1706)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 

[jira] [Commented] (FLINK-28903) flink-table-store-hive-catalog could not shade hive-shims-0.23

2022-08-11 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578291#comment-17578291
 ] 

Nicholas Jiang commented on FLINK-28903:


[~lzljs3620320], when the value of hive.metastore.use.SSL or 
hive.metastore.sasl.enabled is true, the above exception occurs for Hive 2.x, 
including Hive 2.3.

> flink-table-store-hive-catalog could not shade hive-shims-0.23
> --
>
> Key: FLINK-28903
> URL: https://issues.apache.org/jira/browse/FLINK-28903
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> flink-table-store-hive-catalog could not shade hive-shims-0.23 because 
> artifactSet doesn't include hive-shims-0.23 and the minimizeJar is set to 
> true. The exception is as follows:
> {code:java}
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>     at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1708)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:380)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.hive.HiveCatalog.<init>(HiveCatalog.java:80) 
> ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:93)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:62)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
>  ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
>     at 
> org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:428)
>  ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1356)
>  ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:)
>  ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:209)
>  ~[flink-sql-client-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
>  ~[flink-sql-client-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:209)
>  ~[flink-sql-client-1.15.1.jar:1.15.1]
>     ... 10 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> ~[?:1.8.0_181]
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  ~[?:1.8.0_181]
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ~[?:1.8.0_181]
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> ~[?:1.8.0_181]
>     at 
> 

[jira] [Created] (FLINK-28903) flink-table-store-hive-catalog could not shade hive-shims-0.23

2022-08-10 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28903:
--

 Summary: flink-table-store-hive-catalog could not shade 
hive-shims-0.23
 Key: FLINK-28903
 URL: https://issues.apache.org/jira/browse/FLINK-28903
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.3.0


flink-table-store-hive-catalog could not shade hive-shims-0.23 because 
artifactSet doesn't include hive-shims-0.23 and the minimizeJar is set to true. 
The exception is as follows:
{code:java}
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1708)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.createClient(HiveCatalog.java:380)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalog.<init>(HiveCatalog.java:80) 
~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.hive.HiveCatalogFactory.create(HiveCatalogFactory.java:51)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.file.catalog.CatalogFactory.createCatalog(CatalogFactory.java:93)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:62)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:57)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.connector.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:31)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:428)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1356)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:)
 ~[flink-table-api-java-uber-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:209)
 ~[flink-sql-client-1.15.1.jar:1.15.1]
    ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
~[?:1.8.0_181]
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_181]
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_181]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_181]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1706)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 
org.apache.flink.table.store.shaded.org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
 ~[flink-table-store-dist-0.3-SNAPSHOT.jar:0.3-SNAPSHOT]
    at 

[jira] [Commented] (FLINK-28574) Bump the fabric8 kubernetes-client to 6.0.0

2022-08-09 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577376#comment-17577376
 ] 

Nicholas Jiang commented on FLINK-28574:


[~ConradJam], the Java Operator SDK 3.1.1 has been released and bumps the 
fabric8 kubernetes-client to 5.12.3. I have already bumped the 
kubernetes-client. This ticket could be closed after the above PR is merged.

cc [~gyfora] 

> Bump the fabric8 kubernetes-client to 6.0.0
> ---
>
> Key: FLINK-28574
> URL: https://issues.apache.org/jira/browse/FLINK-28574
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: ConradJam
>Priority: Major
>  Labels: pull-request-available
>
> fabric8 kubernetes-client has now released 
> [6.0.0|https://github.com/fabric8io/kubernetes-client/releases/tag/v6.0.0]. 
> Later we can upgrade to this version and remove the deprecated API usage.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28754) Document that Java 8 is required to build table store

2022-08-05 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28754:
---
Summary: Document that Java 8 is required to build table store  (was: 
document that Java 8 is required to build table store)

> Document that Java 8 is required to build table store
> -
>
> Key: FLINK-28754
> URL: https://issues.apache.org/jira/browse/FLINK-28754
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table Store
>Reporter: David Anderson
>Priority: Major
>
> The table store can not be built with Java 11, but the "build from source" 
> instructions don't mention this restriction.
> https://nightlies.apache.org/flink/flink-table-store-docs-master/docs/engines/build/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28840) Introduce roadmap document of Flink Table Store

2022-08-05 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17575833#comment-17575833
 ] 

Nicholas Jiang commented on FLINK-28840:


[~lzljs3620320], could you please assign this ticket to me?

> Introduce roadmap document of Flink Table Store
> ---
>
> Key: FLINK-28840
> URL: https://issues.apache.org/jira/browse/FLINK-28840
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Nicholas Jiang
>Priority: Minor
> Fix For: table-store-0.3.0
>
>
> The Flink Table Store subproject needs its own roadmap document to present an 
> overview of the general direction.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28840) Introduce roadmap document of Flink Table Store

2022-08-05 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28840:
---
Summary: Introduce roadmap document of Flink Table Store  (was: Introduce 
Roadmap document of Flink Table Store)

> Introduce roadmap document of Flink Table Store
> ---
>
> Key: FLINK-28840
> URL: https://issues.apache.org/jira/browse/FLINK-28840
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Nicholas Jiang
>Priority: Minor
> Fix For: table-store-0.3.0
>
>
> The Flink Table Store subproject needs its own roadmap document to present an 
> overview of the general direction.





[jira] [Updated] (FLINK-28840) Introduce Roadmap document of Flink Table Store

2022-08-05 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28840:
---
Component/s: Table Store

> Introduce Roadmap document of Flink Table Store
> ---
>
> Key: FLINK-28840
> URL: https://issues.apache.org/jira/browse/FLINK-28840
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Nicholas Jiang
>Priority: Minor
> Fix For: table-store-0.3.0
>
>
> The Flink Table Store subproject needs its own roadmap document to present an 
> overview of the general direction.





[jira] [Created] (FLINK-28840) Introduce Roadmap document of Flink Table Store

2022-08-05 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28840:
--

 Summary: Introduce Roadmap document of Flink Table Store
 Key: FLINK-28840
 URL: https://issues.apache.org/jira/browse/FLINK-28840
 Project: Flink
  Issue Type: Improvement
Reporter: Nicholas Jiang
 Fix For: table-store-0.3.0


The Flink Table Store subproject needs its own roadmap document to present an 
overview of the general direction.





[jira] [Commented] (FLINK-28594) Add metrics for FlinkService

2022-08-04 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17575237#comment-17575237
 ] 

Nicholas Jiang commented on FLINK-28594:


[~matyas], what are most of the blocking operations? I'm adding Histogram 
metrics for operations such as triggerSavepoint, excluding query operations 
such as listJobs.
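The approach being discussed can be sketched as wrapping each blocking FlinkService call and recording its duration in a histogram. The `Histogram` interface below is a simplified stand-in for Flink's `org.apache.flink.metrics.Histogram`, and the method and class names are illustrative only, not the actual operator code.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class TimedFlinkService {

    /** Stand-in for org.apache.flink.metrics.Histogram; records one sample. */
    interface Histogram {
        void update(long value);
    }

    /** Runs a blocking operation and records its wall-clock duration in ms. */
    static <T> T timed(Histogram histogram, Supplier<T> operation) {
        long start = System.nanoTime();
        try {
            return operation.get();
        } finally {
            histogram.update((System.nanoTime() - start) / 1_000_000);
        }
    }

    public static void main(String[] args) {
        AtomicLong lastSample = new AtomicLong(-1);
        Histogram histogram = lastSample::set;

        // Hypothetical blocking call standing in for e.g. triggerSavepoint.
        String triggerId = timed(histogram, () -> "savepoint-trigger-id");

        System.out.println(triggerId);            // savepoint-trigger-id
        System.out.println(lastSample.get() >= 0); // true
    }
}
```

Query-style operations such as listJobs would simply skip the `timed` wrapper.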

> Add metrics for FlinkService
> 
>
> Key: FLINK-28594
> URL: https://issues.apache.org/jira/browse/FLINK-28594
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.2.0
>
>
> We would need some metrics for the `FlinkService` to be able to tell how long 
> it takes to perform most of the blocking operations we have in this 
> service.





[jira] [Commented] (FLINK-28727) Flink Source supports SupportsLimitPushDown

2022-07-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17572716#comment-17572716
 ] 

Nicholas Jiang commented on FLINK-28727:


[~lzljs3620320], could you please assign this ticket to me? I will push a PR 
for this support.

> Flink Source supports SupportsLimitPushDown
> ---
>
> Key: FLINK-28727
> URL: https://issues.apache.org/jira/browse/FLINK-28727
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Minor
> Fix For: table-store-0.2.0
>
>






[jira] [Commented] (FLINK-28594) Add metrics for FlinkService

2022-07-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17572253#comment-17572253
 ] 

Nicholas Jiang commented on FLINK-28594:


[~matyas], are you working on this?

> Add metrics for FlinkService
> 
>
> Key: FLINK-28594
> URL: https://issues.apache.org/jira/browse/FLINK-28594
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.2.0
>
>
> We would need some metrics for the `FlinkService` to be able to tell how long 
> it takes to perform most of the blocking operations we have in this 
> service.





[jira] [Commented] (FLINK-28705) Update copyright year to 2014-2022 in NOTICE files

2022-07-27 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17571828#comment-17571828
 ] 

Nicholas Jiang commented on FLINK-28705:


[~lzljs3620320], could you please assign this ticket to me and close it?

> Update copyright year to 2014-2022 in NOTICE files
> --
>
> Key: FLINK-28705
> URL: https://issues.apache.org/jira/browse/FLINK-28705
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Copyright year of the NOTICE files in Flink Table Store should be '2014-2022'.





[jira] [Updated] (FLINK-28705) Update copyright year to 2014-2022 in NOTICE files

2022-07-27 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28705:
---
Description: Copyright year of the NOTICE files in Flink Table Store should 
be '2014-2022'.  (was: Copyright year of the NOTICE file in Flink Table Store 
should be '2014-2022'.)

> Update copyright year to 2014-2022 in NOTICE files
> --
>
> Key: FLINK-28705
> URL: https://issues.apache.org/jira/browse/FLINK-28705
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Nicholas Jiang
>Priority: Minor
> Fix For: table-store-0.3.0
>
>
> Copyright year of the NOTICE files in Flink Table Store should be '2014-2022'.





[jira] [Created] (FLINK-28705) Update copyright year to 2014-2022 in NOTICE files

2022-07-27 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28705:
--

 Summary: Update copyright year to 2014-2022 in NOTICE files
 Key: FLINK-28705
 URL: https://issues.apache.org/jira/browse/FLINK-28705
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.3.0


Copyright year of the NOTICE file in Flink Table Store should be '2014-2022'.





[jira] [Commented] (FLINK-28086) Table Store Catalog supports partition methods

2022-07-22 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17569875#comment-17569875
 ] 

Nicholas Jiang commented on FLINK-28086:


[~lzljs3620320], could you please assign this ticket to me? I'm working on 
support for the partition methods.

> Table Store Catalog supports partition methods
> --
>
> Key: FLINK-28086
> URL: https://issues.apache.org/jira/browse/FLINK-28086
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Minor
> Fix For: table-store-0.2.0
>
>
> Table Store Catalog can support:
>  * listPartitions
>  * listPartitionsByFilter
>  * getPartition
>  * partitionExists
>  * dropPartition
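The listed methods can be sketched as a small catalog interface. The types and signatures below are simplified stand-ins for Flink's catalog classes (and cover only a subset of the listed methods, over a single table for brevity), so they are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class PartitionCatalogSketch {

    /** Simplified stand-in for Flink's CatalogPartitionSpec (key -> value). */
    record PartitionSpec(Map<String, String> values) {}

    /** A subset of the partition methods the ticket lists. */
    interface PartitionedCatalog {
        List<PartitionSpec> listPartitions();
        List<PartitionSpec> listPartitionsByFilter(Predicate<PartitionSpec> filter);
        boolean partitionExists(PartitionSpec spec);
        void dropPartition(PartitionSpec spec);
    }

    /** Trivial in-memory implementation used only to exercise the interface. */
    static class InMemoryCatalog implements PartitionedCatalog {
        private final List<PartitionSpec> partitions = new ArrayList<>();

        void addPartition(PartitionSpec spec) { partitions.add(spec); }
        public List<PartitionSpec> listPartitions() { return new ArrayList<>(partitions); }
        public List<PartitionSpec> listPartitionsByFilter(Predicate<PartitionSpec> filter) {
            return partitions.stream().filter(filter).toList();
        }
        public boolean partitionExists(PartitionSpec spec) { return partitions.contains(spec); }
        public void dropPartition(PartitionSpec spec) { partitions.remove(spec); }
    }

    public static void main(String[] args) {
        InMemoryCatalog catalog = new InMemoryCatalog();
        catalog.addPartition(new PartitionSpec(Map.of("dt", "2022-07-22")));
        catalog.addPartition(new PartitionSpec(Map.of("dt", "2022-07-23")));

        System.out.println(catalog.listPartitions().size()); // 2
        System.out.println(catalog.listPartitionsByFilter(
                p -> p.values().get("dt").endsWith("23")).size()); // 1
        catalog.dropPartition(new PartitionSpec(Map.of("dt", "2022-07-22")));
        System.out.println(
                catalog.partitionExists(new PartitionSpec(Map.of("dt", "2022-07-22")))); // false
    }
}
```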





[jira] [Commented] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2

2022-07-17 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567768#comment-17567768
 ] 

Nicholas Jiang commented on FLINK-28578:


[~lzljs3620320], could you please assign this ticket to me?

> Upgrade Spark version of flink-table-store-spark to 3.2.2
> -
>
> Key: FLINK-28578
> URL: https://issues.apache.org/jira/browse/FLINK-28578
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark 
> UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or 
> 3.3.0 or later.





[jira] [Updated] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2

2022-07-17 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28578:
---
Summary: Upgrade Spark version of flink-table-store-spark to 3.2.2  (was: 
Upgrade Spark version of flink-table-store-spark to 3.1.3, 3.2.2 or 3.3.0 or 
later)

> Upgrade Spark version of flink-table-store-spark to 3.2.2
> -
>
> Key: FLINK-28578
> URL: https://issues.apache.org/jira/browse/FLINK-28578
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Nicholas Jiang
>Priority: Minor
> Fix For: table-store-0.2.0
>
>
> CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark 
> UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or 
> 3.3.0 or later.





[jira] [Created] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.1.3, 3.2.2 or 3.3.0 or later

2022-07-17 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28578:
--

 Summary: Upgrade Spark version of flink-table-store-spark to 
3.1.3, 3.2.2 or 3.3.0 or later
 Key: FLINK-28578
 URL: https://issues.apache.org/jira/browse/FLINK-28578
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.2.0


CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark 
UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or 3.3.0 
or later.





[jira] [Commented] (FLINK-28560) Support Spark 3.3 profile for SparkSource

2022-07-15 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567122#comment-17567122
 ] 

Nicholas Jiang commented on FLINK-28560:


[~lzljs3620320], could you please assign this ticket to me and close it?

> Support Spark 3.3 profile for SparkSource
> -
>
> Key: FLINK-28560
> URL: https://issues.apache.org/jira/browse/FLINK-28560
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> The flink-table-store-spark module supports Spark 3.0~3.2 profiles. Spark has 
> published version 3.3.0, so a Spark 3.3 profile can be introduced for 
> SparkSource to follow the Spark 3 release line.





[jira] [Created] (FLINK-28560) Support Spark 3.3 profile for SparkSource

2022-07-14 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-28560:
--

 Summary: Support Spark 3.3 profile for SparkSource
 Key: FLINK-28560
 URL: https://issues.apache.org/jira/browse/FLINK-28560
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Nicholas Jiang
 Fix For: table-store-0.2.0


The flink-table-store-spark module supports Spark 3.0~3.2 profiles. Spark has 
published version 3.3.0, so a Spark 3.3 profile can be introduced for 
SparkSource to follow the Spark 3 release line.





[jira] [Updated] (FLINK-28535) Support create namespace/table for SparkCatalog

2022-07-14 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-28535:
---
Summary: Support create namespace/table for SparkCatalog  (was: Support 
create database/table for SparkCatalog)

> Support create namespace/table for SparkCatalog
> ---
>
> Key: FLINK-28535
> URL: https://issues.apache.org/jira/browse/FLINK-28535
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>






[jira] [Commented] (FLINK-28480) Forward timeControllerExecution time as histogram for JOSDK Metrics interface

2022-07-12 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565330#comment-17565330
 ] 

Nicholas Jiang commented on FLINK-28480:


[~gyfora], could you please assign this ticket to me? I have opened a PR for 
this ticket.

> Forward timeControllerExecution time as histogram for JOSDK Metrics interface
> -
>
> Key: FLINK-28480
> URL: https://issues.apache.org/jira/browse/FLINK-28480
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.1.0
>
>
> Currently the JOSDK metrics forwarder logic doesn't implement the 
> timeControllerExecution function. We should implement it with the following 
> logic:
> 1. Measure execution time for successful/failed executions.
> 2. Based on the name of the ControllerExecution (reconcile/cleanup) and 
> controller name, track the following histogram metrics:
> JOSDK.\{ControllerExecution.controllerName}.\{ControllerExecution.name}.\{ControllerExecution.successTypeName}/failed
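The two steps above can be sketched as follows. The `ControllerExecution` interface here is a stand-in mirroring the JOSDK contract as described in the ticket (its member names and the metric naming scheme are assumptions taken from the ticket text, not verified JOSDK signatures), and a plain map stands in for the histogram registry.

```java
import java.util.HashMap;
import java.util.Map;

public class TimedControllerExecutionSketch {

    /** Stand-in for JOSDK's ControllerExecution; member names follow the ticket text. */
    interface ControllerExecution<T> {
        String name();                 // e.g. "reconcile" or "cleanup"
        String controllerName();
        String successTypeName(T result);
        T execute() throws Exception;
    }

    /** Stand-in histogram registry: metric name -> last recorded duration (ms). */
    static final Map<String, Long> HISTOGRAMS = new HashMap<>();

    /** Times one execution and records it under a success or failed metric name. */
    static <T> T timeControllerExecution(ControllerExecution<T> execution) throws Exception {
        String base = "JOSDK." + execution.controllerName() + "." + execution.name();
        long start = System.nanoTime();
        try {
            T result = execution.execute();
            HISTOGRAMS.put(base + "." + execution.successTypeName(result),
                    (System.nanoTime() - start) / 1_000_000);
            return result;
        } catch (Exception e) {
            HISTOGRAMS.put(base + ".failed", (System.nanoTime() - start) / 1_000_000);
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        ControllerExecution<String> reconcile = new ControllerExecution<>() {
            public String name() { return "reconcile"; }
            public String controllerName() { return "flinkdeploymentcontroller"; }
            public String successTypeName(String result) { return "resource"; }
            public String execute() { return "ok"; }
        };
        timeControllerExecution(reconcile);
        System.out.println(
                HISTOGRAMS.containsKey("JOSDK.flinkdeploymentcontroller.reconcile.resource"));
    }
}
```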





[jira] [Comment Edited] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-08 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564329#comment-17564329
 ] 

Nicholas Jiang edited comment on FLINK-28364 at 7/8/22 3:35 PM:


[~bgeng777], users can build a custom image with Python and PyFlink installed 
by following the document 
[using-flink-python-on-docker|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#using-flink-python-on-docker],
 so there is no need for more documentation about building a PyFlink image. 
Hence you could first publish the official PyFlink image, then provide the 
PyFlink example.
[~wu3396], WDYT?


was (Author: nicholasjiang):
[~bgeng777], the users build a custom image which has Python and PyFlink 
prepared by following the document 
[using-flink-python-on-docker|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#using-flink-python-on-docker],
 no need to more document about building PyFlink image. Hence you could firstly 
publish the official PyFlink image, then provide the PyFlink example.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[jira] [Commented] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-08 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564329#comment-17564329
 ] 

Nicholas Jiang commented on FLINK-28364:


[~bgeng777], users can build a custom image with Python and PyFlink installed 
by following the document 
[using-flink-python-on-docker|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#using-flink-python-on-docker],
 so there is no need for more documentation about building a PyFlink image. 
Hence you could first publish the official PyFlink image, then provide the 
PyFlink example.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[jira] [Commented] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-07 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564079#comment-17564079
 ] 

Nicholas Jiang commented on FLINK-28364:


[~dianfu][~gyfora][~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be implemented more easily.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[jira] [Comment Edited] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-07 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564079#comment-17564079
 ] 

Nicholas Jiang edited comment on FLINK-28364 at 7/8/22 4:21 AM:


[~dianfu], [~gyfora], [~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be implemented more easily.


was (Author: nicholasjiang):
[~dianfu][~gyfora][~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be more easily worked.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[jira] [Commented] (FLINK-28100) rocketmq-flink checkpoint

2022-06-16 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17555355#comment-17555355
 ] 

Nicholas Jiang commented on FLINK-28100:


[~SOD_DOB], I'm the RocketMQ connector owner. What problem did you encounter 
with checkpointing?

> rocketmq-flink checkpoint
> -
>
> Key: FLINK-28100
> URL: https://issues.apache.org/jira/browse/FLINK-28100
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / State Backends
>Affects Versions: 1.13.2
> Environment: flink version: flink-1.13.2
> rocketmq version: 4.2.0
>Reporter: yantao
>Priority: Major
>  Labels: RocketMQ, checkpoint
>
> When I use [ROCKETMQ-FLINK|https://github.com/apache/rocketmq-flink], I 
> don't know how to set up the state backend and save checkpoints in HDFS.
> Is that not supported?
> Can you help me?
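For reference: the state backend and checkpoint storage are configured in Flink itself (flink-conf.yaml or the StreamExecutionEnvironment), not in the RocketMQ connector. A minimal flink-conf.yaml sketch for HDFS checkpoints; the namenode address and paths are placeholders:

```yaml
# Illustrative flink-conf.yaml fragment; the HDFS address is a placeholder.
state.backend: rocksdb                                        # or: filesystem
state.checkpoints.dir: hdfs://namenode:8020/flink/checkpoints
state.savepoints.dir: hdfs://namenode:8020/flink/savepoints
execution.checkpointing.interval: 60s
```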



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27914) Integrate JOSDK metrics with Flink Metrics reporter

2022-06-10 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552629#comment-17552629
 ] 

Nicholas Jiang commented on FLINK-27914:


[~gyfora], I'm working on the implementation of the 
'io.javaoperatorsdk.operator.api.monitoring.Metrics' interface and its 
integration with the operator. Could you please assign this ticket to me?

> Integrate JOSDK metrics with Flink Metrics reporter
> ---
>
> Key: FLINK-27914
> URL: https://issues.apache.org/jira/browse/FLINK-27914
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Minor
>  Labels: Starter
> Fix For: kubernetes-operator-1.1.0
>
>
> The Java Operator SDK comes with an internal metric interface that could be 
> implemented to forward metrics/measurements to the Flink metric registries. 
> We should investigate and implement this if possible.





[jira] [Closed] (FLINK-27259) Observer should not clear savepoint errors even though deployment is healthy

2022-06-09 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang closed FLINK-27259.
--
Resolution: Won't Fix

> Observer should not clear savepoint errors even though deployment is healthy
> 
>
> Key: FLINK-27259
> URL: https://issues.apache.org/jira/browse/FLINK-27259
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>
> Even though the deployment is healthy and job is running, triggering 
> savepoint still could fail with errors. See FLINK-27257 for more information. 
> These errors should not be cleared in {{{}AbstractDeploymentObserver{}}}.





[jira] [Commented] (FLINK-27259) Observer should not clear savepoint errors even though deployment is healthy

2022-06-09 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552487#comment-17552487
 ] 

Nicholas Jiang commented on FLINK-27259:


[~wangyang0918], after FLINK-27257 was closed, IMO this ticket could be closed.
cc [~gyfora]

> Observer should not clear savepoint errors even though deployment is healthy
> 
>
> Key: FLINK-27259
> URL: https://issues.apache.org/jira/browse/FLINK-27259
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>
> Even though the deployment is healthy and job is running, triggering 
> savepoint still could fail with errors. See FLINK-27257 for more information. 
> These errors should not be cleared in {{{}AbstractDeploymentObserver{}}}.





[jira] [Commented] (FLINK-27913) Remove savepointHistoryMaxCount and savepointHistoryMaxAge from FlinkOperatorConfiguration

2022-06-06 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550432#comment-17550432
 ] 

Nicholas Jiang commented on FLINK-27913:


[~gyfora], I will work on this ticket. Could you please assign it to me? 
Thanks.

> Remove savepointHistoryMaxCount and savepointHistoryMaxAge from 
> FlinkOperatorConfiguration
> --
>
> Key: FLINK-27913
> URL: https://issues.apache.org/jira/browse/FLINK-27913
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Critical
>  Labels: Starter
> Fix For: kubernetes-operator-1.1.0
>
>
> Currently savepointHistoryMaxCount and savepointHistoryMaxAge are part of the 
> FlinkOperatorConfiguration class, which means that users cannot override them 
> from their user deployment.
> We should remove them and get them directly from the effective config. 
> We should, however, introduce a max allowed value for these configurations, 
> configured on the FlinkOperatorConfiguration level, so that users cannot 
> infinitely grow the status size and cause problems for the k8s api.





[jira] [Commented] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-06-01 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544720#comment-17544720
 ] 

Nicholas Jiang commented on FLINK-27257:


[~gyfora], I don't mind. I will take a look at the implementation of savepoint 
triggering/management.  Thanks.

> Flink kubernetes operator triggers savepoint failed because of not all tasks 
> running
> 
>
> Key: FLINK-27257
> URL: https://issues.apache.org/jira/browse/FLINK-27257
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.1.0
>
>
> {code:java}
> 2022-04-15 02:38:56,551 o.a.f.k.o.s.FlinkService       [INFO 
> ][default/flink-example-statemachine] Fetching savepoint result with 
> triggerId: 182d7f176496856d7b33fe2f3767da18
> 2022-04-15 02:38:56,690 o.a.f.k.o.s.FlinkService       
> [ERROR][default/flink-example-statemachine] Savepoint error
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Source: Custom Source (1/2) of job 
>  is not being executed at the moment. 
> Aborting checkpoint. Failure reason: Not all required tasks are currently 
> running.
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:143)
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:105)
>     at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
>     at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
>     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
>     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at akka.actor.Actor.aroundReceive(Actor.scala:537)
>     at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:548)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>     at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> 2022-04-15 02:38:56,693 o.a.f.k.o.o.SavepointObserver  
> [ERROR][default/flink-example-statemachine] Checkpoint triggering task 
> Source: Custom Source (1/2) of job  is not 
> being executed at the moment. Aborting checkpoint. Failure reason: Not all 
> required tasks are currently running. {code}
> How to reproduce?
> Update arbitrary fields (e.g. parallelism) along with 
> {{{}savepointTriggerNonce{}}}.
>  
> The root cause might be that the running state returned by 
> {{ClusterClient#listJobs()}} does not mean all the tasks are running.





[jira] [Commented] (FLINK-27668) Document dynamic operator configuration

2022-05-27 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543414#comment-17543414
 ] 

Nicholas Jiang commented on FLINK-27668:


[~gyfora], could you please assign this ticket to me? I have opened a pull 
request for the documentation.

> Document dynamic operator configuration
> ---
>
> Key: FLINK-27668
> URL: https://issues.apache.org/jira/browse/FLINK-27668
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> The Kubernetes operator now supports dynamic config changes through the 
> operator configmap.
> This feature is not documented properly and should be added to the 
> operations/configuration page.





[jira] [Commented] (FLINK-27759) Rethink how to get the git commit id for docker image in Flink Kubernetes operator

2022-05-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541784#comment-17541784
 ] 

Nicholas Jiang commented on FLINK-27759:


[~wangyang0918], I will investigate a better way to get the git commit id for 
the docker image in the Flink Kubernetes operator.

> Rethink how to get the git commit id for docker image in Flink Kubernetes 
> operator
> --
>
> Key: FLINK-27759
> URL: https://issues.apache.org/jira/browse/FLINK-27759
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>
> Following the discussion in the PRs [1][2], we need to rethink how to get the 
> git commit id properly. Currently we rely on the .git directory, which is a 
> problem when building the image from a source release.
>  
> [1]. [https://github.com/apache/flink-kubernetes-operator/pull/243]
> [2]. https://github.com/apache/flink-kubernetes-operator/pull/241





[jira] [Comment Edited] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541412#comment-17541412
 ] 

Nicholas Jiang edited comment on FLINK-27746 at 5/24/22 10:49 AM:
--

[~tison], it's my mistake that I didn't comment on this ticket, and thanks to 
[~wangyang0918] for explaining. [~wangyang0918], shipping the .git directory is 
the better solution at present, and you need to update the current release 
process.


was (Author: nicholasjiang):
[~tison], it's my mistake that I didn't comment on this ticket, and thanks to 
[~wangyang0918] for explaining. [~wangyang0918], shipping the .git directory is 
the better solution at present.

> Flink kubernetes operator docker image could not build with source release
> --
>
> Key: FLINK-27746
> URL: https://issues.apache.org/jira/browse/FLINK-27746
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Could not build the Docker image from the source release, getting the
> following error:
> > [build 11/14] COPY .git ./.git:
> --
> failed to compute cache key: "/.git" not found: not found





[jira] [Comment Edited] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541412#comment-17541412
 ] 

Nicholas Jiang edited comment on FLINK-27746 at 5/24/22 10:48 AM:
--

[~tison], it's my mistake that I didn't comment on this ticket, and thanks to 
[~wangyang0918] for explaining. [~wangyang0918], shipping the .git directory is 
the better solution at present.


was (Author: nicholasjiang):
[~tison], it's my mistake that I didn't comment on this ticket, and thanks to 
[~wangyang0918] for explaining. "Setup another approach to generate git 
properties before release and skip the related phase when building with 
release artifacts." makes sense to me.






[jira] [Commented] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541412#comment-17541412
 ] 

Nicholas Jiang commented on FLINK-27746:


[~tison], it's my mistake that I didn't comment on this ticket, and thanks to 
[~wangyang0918] for explaining. "Setup another approach to generate git 
properties before release and skip the related phase when building with 
release artifacts." makes sense to me.






[jira] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-24 Thread Nicholas Jiang (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27746 ]


Nicholas Jiang deleted comment on FLINK-27746:


was (Author: nicholasjiang):
[~tison], if we simply remove "COPY .git", the git info could not be logged in 
the environment info of the Flink job.






[jira] [Commented] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541409#comment-17541409
 ] 

Nicholas Jiang commented on FLINK-27746:


[~tison], if we simply remove "COPY .git", the git info could not be logged in 
the environment info of the Flink job.






[jira] [Comment Edited] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-17 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538539#comment-17538539
 ] 

Nicholas Jiang edited comment on FLINK-27257 at 5/18/22 5:21 AM:
-

I have discussed this offline with [~wangyang0918]. It's hard to improve the 
isJobRunning implementation because the JobStatusMessages returned by 
ClusterClient#listJobs() don't contain the tasksPerState of the JobDetail, so 
there is no way to judge whether the ExecutionState of every task in the job 
is RUNNING, and for batch jobs it is unclear how to judge whether the job is 
really running.

The current idea is that if the error indicates that not all required tasks 
are currently running, we continue to trigger the savepoint in the next 
reconciliation until it is successfully triggered.

[~gyfora], [~wangyang0918] WDYT?
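For illustration, the missing per-task check could in principle be made 
against the REST API's job-details payload, which does expose a status per 
vertex. A minimal Python sketch over that payload shape (the state, vertices 
and status field names mirror the REST job-details response, but treat the 
exact shape as an assumption here):

```python
def all_tasks_running(job_details: dict) -> bool:
    # A job-level state of RUNNING is not sufficient for savepoints:
    # every vertex (task) must itself have reached RUNNING.
    if job_details.get("state") != "RUNNING":
        return False
    vertices = job_details.get("vertices", [])
    return bool(vertices) and all(
        v.get("status") == "RUNNING" for v in vertices
    )

# A job that is RUNNING overall while one task is still being scheduled:
partially_running = {
    "state": "RUNNING",
    "vertices": [
        {"name": "Source: Custom Source", "status": "RUNNING"},
        {"name": "Sink", "status": "SCHEDULED"},
    ],
}
print(all_tasks_running(partially_running))
```

Under the retry idea above, a False result would simply postpone the savepoint 
trigger to the next reconciliation loop.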


was (Author: nicholasjiang):
I have discussed this offline with [~wangyang0918]. It's hard to improve the 
isJobRunning implementation because the JobStatusMessages returned by 
ClusterClient#listJobs() don't contain the tasksPerState of the JobDetail, so 
there is no way to judge whether the ExecutionState of every task in the job 
is RUNNING, and for batch jobs it is unclear how to judge whether the job is 
really running.

The current idea is that if the error indicates that not all required tasks 
are currently running, we continue to trigger the savepoint in the next 
reconciliation until it is successfully triggered.

[~gyfora][~wangyang0918] WDYT?

> Flink kubernetes operator triggers savepoint failed because of not all tasks 
> running
> 
>
> Key: FLINK-27257
> URL: https://issues.apache.org/jira/browse/FLINK-27257
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> {code:java}
> 2022-04-15 02:38:56,551 o.a.f.k.o.s.FlinkService       [INFO 
> ][default/flink-example-statemachine] Fetching savepoint result with 
> triggerId: 182d7f176496856d7b33fe2f3767da18
> 2022-04-15 02:38:56,690 o.a.f.k.o.s.FlinkService       
> [ERROR][default/flink-example-statemachine] Savepoint error
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Source: Custom Source (1/2) of job 
>  is not being executed at the moment. 
> Aborting checkpoint. Failure reason: Not all required tasks are currently 
> running.
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:143)
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:105)
>     at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
>     at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
>     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
>     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at akka.actor.Actor.aroundReceive(Actor.scala:537)
>     at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:548)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at 

[jira] [Comment Edited] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-17 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538539#comment-17538539
 ] 

Nicholas Jiang edited comment on FLINK-27257 at 5/18/22 1:44 AM:
-

I have discussed this offline with [~wangyang0918]. It's hard to improve the 
isJobRunning implementation because the JobStatusMessages returned by 
ClusterClient#listJobs() don't contain the tasksPerState of the JobDetail, so 
there is no way to judge whether the ExecutionState of every task in the job 
is RUNNING, and for batch jobs it is unclear how to judge whether the job is 
really running.

The current idea is that if the error indicates that not all required tasks 
are currently running, we continue to trigger the savepoint in the next 
reconciliation until it is successfully triggered.

[~gyfora][~wangyang0918] WDYT?


was (Author: nicholasjiang):
I have discussed this offline with [~wangyang0918]. It's hard to improve the 
isJobRunning implementation because the JobStatusMessages returned by 
ClusterClient#listJobs() don't contain the tasksPerState of the JobDetail, so 
there is no way to judge whether the ExecutionState of every task in the job 
is RUNNING, and for batch jobs it is unclear how to judge whether the job is 
really running.

The current idea is that if the error indicates "Not all required tasks are 
currently running", we continue to trigger the savepoint in the next 
reconciliation until it is successfully triggered.

[~gyfora][~wangyang0918] WDYT?


[jira] [Commented] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-17 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538539#comment-17538539
 ] 

Nicholas Jiang commented on FLINK-27257:


I have discussed this offline with [~wangyang0918]. It's hard to improve the 
isJobRunning implementation because the JobStatusMessages returned by 
ClusterClient#listJobs() don't contain the tasksPerState of the JobDetail, so 
there is no way to judge whether the ExecutionState of every task in the job 
is RUNNING, and for batch jobs it is unclear how to judge whether the job is 
really running.

The current idea is that if the error indicates "Not all required tasks are 
currently running", we continue to trigger the savepoint in the next 
reconciliation until it is successfully triggered.

[~gyfora][~wangyang0918] WDYT?

> Flink kubernetes operator triggers savepoint failed because of not all tasks 
> running
> 
>
> Key: FLINK-27257
> URL: https://issues.apache.org/jira/browse/FLINK-27257
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> {code:java}
> 2022-04-15 02:38:56,551 o.a.f.k.o.s.FlinkService       [INFO 
> ][default/flink-example-statemachine] Fetching savepoint result with 
> triggerId: 182d7f176496856d7b33fe2f3767da18
> 2022-04-15 02:38:56,690 o.a.f.k.o.s.FlinkService       
> [ERROR][default/flink-example-statemachine] Savepoint error
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Source: Custom Source (1/2) of job 
>  is not being executed at the moment. 
> Aborting checkpoint. Failure reason: Not all required tasks are currently 
> running.
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:143)
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:105)
>     at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
>     at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
>     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
>     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at akka.actor.Actor.aroundReceive(Actor.scala:537)
>     at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:548)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>     at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> 2022-04-15 02:38:56,693 o.a.f.k.o.o.SavepointObserver  
> [ERROR][default/flink-example-statemachine] Checkpoint triggering task 
> Source: Custom Source (1/2) of job  is not 
> being executed at the moment. Aborting checkpoint. Failure reason: Not all 
> required tasks are currently running. {code}
> How to reproduce?
> Update arbitrary fields(e.g. parallelism) along with 
> {{{}savepointTriggerNonce{}}}.
>  
> The root cause might be that the running state returned by 
> {{ClusterClient#listJobs()}} does not mean all the 

[jira] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-17 Thread Nicholas Jiang (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27257 ]


Nicholas Jiang deleted comment on FLINK-27257:


was (Author: nicholasjiang):
[~gyfora], sorry for the late reply. I will push a pull request today.

> Flink kubernetes operator triggers savepoint failed because of not all tasks 
> running
> 
>
> Key: FLINK-27257
> URL: https://issues.apache.org/jira/browse/FLINK-27257
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> {code:java}
> 2022-04-15 02:38:56,551 o.a.f.k.o.s.FlinkService       [INFO 
> ][default/flink-example-statemachine] Fetching savepoint result with 
> triggerId: 182d7f176496856d7b33fe2f3767da18
> 2022-04-15 02:38:56,690 o.a.f.k.o.s.FlinkService       
> [ERROR][default/flink-example-statemachine] Savepoint error
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Source: Custom Source (1/2) of job 
>  is not being executed at the moment. 
> Aborting checkpoint. Failure reason: Not all required tasks are currently 
> running.
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:143)
>     at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:105)
>     at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
>     at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
>     at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
>     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
>     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>     at akka.actor.Actor.aroundReceive(Actor.scala:537)
>     at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:548)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>     at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> 2022-04-15 02:38:56,693 o.a.f.k.o.o.SavepointObserver  
> [ERROR][default/flink-example-statemachine] Checkpoint triggering task 
> Source: Custom Source (1/2) of job  is not 
> being executed at the moment. Aborting checkpoint. Failure reason: Not all 
> required tasks are currently running. {code}
> How to reproduce?
> Update arbitrary fields(e.g. parallelism) along with 
> {{{}savepointTriggerNonce{}}}.
>  
> The root cause might be that the running state returned by 
> {{ClusterClient#listJobs()}} does not mean all the tasks are running.





[jira] [Commented] (FLINK-27257) Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-11 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17535850#comment-17535850
 ] 

Nicholas Jiang commented on FLINK-27257:


[~gyfora], sorry for the late reply. I will push a pull request today.






[jira] [Commented] (FLINK-27499) Bump base Flink version to 1.15.0

2022-05-10 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17534358#comment-17534358
 ] 

Nicholas Jiang commented on FLINK-27499:


[~gyfora], could you please assign this ticket to me? I previously completed 
FLINK-27412.

> Bump base Flink version to 1.15.0
> -
>
> Key: FLINK-27499
> URL: https://issues.apache.org/jira/browse/FLINK-27499
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> With the 1.15.0 release out, we should bump our Flink dependency to 1.15 if 
> this does not interfere with the 1.14 compatibility.
> [~wangyang0918] what do you think?





[jira] [Commented] (FLINK-27364) Support DESC EXTENDED partition statement for partitioned table

2022-04-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529674#comment-17529674
 ] 

Nicholas Jiang commented on FLINK-27364:


[~lsy], IMO, [~lzljs3620320] has already commented on the pull request for 
FLINK-25177 that it should support all connectors. The connector support 
should be added in that pull request, right?

> Support DESC EXTENDED partition statement for partitioned table
> ---
>
> Key: FLINK-27364
> URL: https://issues.apache.org/jira/browse/FLINK-27364
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> DESCRIBE [EXTENDED] [db_name.]table_name \{ [PARTITION partition_spec]  | 
> [col_name ] };





[jira] [Commented] (FLINK-27412) Allow flinkVersion v1_13 in flink-kubernetes-operator

2022-04-26 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17528471#comment-17528471
 ] 

Nicholas Jiang commented on FLINK-27412:


[~wangyang0918], how many versions will be supported in 
flink-kubernetes-operator? IMO, the usual practice is to support the latest 
three versions.

> Allow flinkVersion v1_13 in flink-kubernetes-operator
> -
>
> Key: FLINK-27412
> URL: https://issues.apache.org/jira/browse/FLINK-27412
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>  Labels: starter
> Fix For: kubernetes-operator-1.0.0
>
>
> The core k8s related features:
>  * native k8s integration for session cluster, 1.10
>  * native k8s integration for application cluster, 1.11
>  * Flink K8s HA, 1.12
>  * pod template, 1.13
> So we could set required the minimum version to 1.13. This will allow more 
> users to have a try on flink-kubernetes-operator.
>  
> BTW, we need to update the e2e tests to cover all the supported versions.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] (FLINK-27392) CEP Pattern supports definition of the maximum time gap between events

2022-04-25 Thread Nicholas Jiang (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27392 ]


Nicholas Jiang deleted comment on FLINK-27392:


was (Author: nicholasjiang):
[~dwysakowicz], WDYT?

> CEP Pattern supports definition of the maximum time gap between events
> --
>
> Key: FLINK-27392
> URL: https://issues.apache.org/jira/browse/FLINK-27392
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Nicholas Jiang
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.16.0
>
>
> At present, Pattern#withIn defines the maximum time interval in which a 
> matching pattern has to be completed in order to be considered valid. The 
> interval corresponds to the maximum time gap between the first and the last 
> event. The maximum time gap between events is needed in certain scenarios, 
> for example, a user purchases a good within a maximum of 5 minutes after browsing.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27401) CEP supports AggregateFunction in IterativeCondition

2022-04-25 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-27401:
--

 Summary: CEP supports AggregateFunction in IterativeCondition
 Key: FLINK-27401
 URL: https://issues.apache.org/jira/browse/FLINK-27401
 Project: Flink
  Issue Type: Improvement
  Components: Library / CEP
Affects Versions: 1.16.0
Reporter: Nicholas Jiang
 Fix For: 1.16.0


IterativeCondition only exposes the filter interface. For aggregation 
operations inside a condition, the condition may be called multiple times 
during NFA processing, so a single event may cause the custom aggregation 
state in the condition to be updated multiple times, for example, when 
filtering goods with a total of more than 1000 orders in the past 10 minutes. 
AggregateFunction is therefore introduced in IterativeCondition to initialize 
and maintain the aggregation state.
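The intent can be illustrated with a small self-contained sketch in plain Java. This is not the Flink CEP API; the class and method names below are hypothetical, and it only demonstrates the aggregate-over-matched-events semantics the proposal describes:

```java
import java.util.List;

// Hypothetical sketch: an aggregating condition computes its decision from an
// aggregate over the events matched so far (here, a running total of order
// amounts), rather than mutating ad-hoc state on every filter() invocation,
// which would be updated multiple times when the NFA re-evaluates a condition.
public class AggregatingConditionSketch {

    // Returns true once the total amount of the matched orders exceeds the
    // threshold (e.g. "goods with a total of more than 1000 orders").
    static boolean totalExceeds(List<Integer> matchedAmounts, int threshold) {
        int total = 0;
        for (int amount : matchedAmounts) {
            total += amount; // in the proposal this would be aggregation state
        }
        return total > threshold;
    }

    public static void main(String[] args) {
        System.out.println(totalExceeds(List.of(300, 400, 200), 1000)); // false
        System.out.println(totalExceeds(List.of(300, 400, 401), 1000)); // true
    }
}
```

Because the aggregate is derived from the matched events themselves, re-invoking the condition is idempotent, which is the property the proposed AggregateFunction hook would guarantee.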



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27392) CEP Pattern supports definition of the maximum time gap between events

2022-04-25 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17527418#comment-17527418
 ] 

Nicholas Jiang commented on FLINK-27392:


[~dwysakowicz], WDYT?

> CEP Pattern supports definition of the maximum time gap between events
> --
>
> Key: FLINK-27392
> URL: https://issues.apache.org/jira/browse/FLINK-27392
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Nicholas Jiang
>Priority: Major
> Fix For: 1.16.0
>
>
> At present, Pattern#withIn defines the maximum time interval in which a 
> matching pattern has to be completed in order to be considered valid. The 
> interval corresponds to the maximum time gap between the first and the last 
> event. The maximum time gap between events is needed in certain scenarios, 
> for example, a user purchases a good within a maximum of 5 minutes after browsing.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27392) CEP Pattern supports definition of the maximum time gap between events

2022-04-25 Thread Nicholas Jiang (Jira)
Nicholas Jiang created FLINK-27392:
--

 Summary: CEP Pattern supports definition of the maximum time gap 
between events
 Key: FLINK-27392
 URL: https://issues.apache.org/jira/browse/FLINK-27392
 Project: Flink
  Issue Type: Improvement
  Components: Library / CEP
Affects Versions: 1.16.0
Reporter: Nicholas Jiang
 Fix For: 1.16.0


At present, Pattern#withIn defines the maximum time interval in which a 
matching pattern has to be completed in order to be considered valid. The 
interval corresponds to the maximum time gap between the first and the last 
event. The maximum time gap between events is needed in certain scenarios, for 
example, a user purchases a good within a maximum of 5 minutes after browsing.
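The difference between the existing and the proposed semantics can be sketched in plain Java. This is not the Flink CEP API; the class and method names are hypothetical, and timestamps stand in for event times:

```java
import java.util.List;

// Hypothetical illustration of the proposed "maximum gap between events"
// check, contrasted with the Pattern#withIn-style check, which only bounds
// the time between the FIRST and the LAST event of a match.
public class MaxGapSketch {

    // withIn-style check: the total span of the match must not exceed maxSpanMs.
    static boolean withinTotalSpan(List<Long> timestampsMs, long maxSpanMs) {
        return timestampsMs.get(timestampsMs.size() - 1) - timestampsMs.get(0) <= maxSpanMs;
    }

    // Proposed check: every gap between CONSECUTIVE events must not exceed maxGapMs.
    static boolean withinMaxGap(List<Long> timestampsMs, long maxGapMs) {
        for (int i = 1; i < timestampsMs.size(); i++) {
            if (timestampsMs.get(i) - timestampsMs.get(i - 1) > maxGapMs) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Events at t = 0, 1 min, and 7 min.
        List<Long> events = List.of(0L, 60_000L, 7 * 60_000L);
        System.out.println(withinTotalSpan(events, 10 * 60_000L)); // true: 7-minute span fits in 10
        System.out.println(withinMaxGap(events, 5 * 60_000L));     // false: 6-minute gap between the last two events
    }
}
```

As the example shows, a match can satisfy the withIn bound on the total span while still violating a per-gap bound, which is why a separate maximum-gap definition is being proposed.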



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] (FLINK-27183) Optimize CepOperator by using temporal state

2022-04-24 Thread Nicholas Jiang (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27183 ]


Nicholas Jiang deleted comment on FLINK-27183:


was (Author: nicholasjiang):
[~danderson], I mainly maintain Flink CEP. How much improvement can the use of 
temporal state bring to the performance of CepOperator on RocksDB? If it 
improves a lot, I want to try this ticket according to the public interfaces 
mentioned in the FLIP-220.

> Optimize CepOperator by using temporal state
> 
>
> Key: FLINK-27183
> URL: https://issues.apache.org/jira/browse/FLINK-27183
> Project: Flink
>  Issue Type: Sub-task
>  Components: Library / CEP
>Reporter: David Anderson
>Priority: Major
>
> The performance of CEP on RocksDB can be significantly improved by having it 
> use temporal state.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27364) Support DESC EXTENDED partition statement for partitioned table

2022-04-24 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17527281#comment-17527281
 ] 

Nicholas Jiang commented on FLINK-27364:


This issue is a duplicate of FLINK-25177.

> Support DESC EXTENDED partition statement for partitioned table
> ---
>
> Key: FLINK-27364
> URL: https://issues.apache.org/jira/browse/FLINK-27364
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> DESCRIBE [EXTENDED] [db_name.]table_name \{ [PARTITION partition_spec]  | 
> [col_name ] };



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27334) Support auto generate the doc for the KubernetesOperatorConfigOptions

2022-04-21 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17526085#comment-17526085
 ] 

Nicholas Jiang commented on FLINK-27334:


[~gyfora] [~aitozi], I have pushed a pull request for this. Could you please 
assign this ticket to me?

> Support auto generate the doc for the KubernetesOperatorConfigOptions
> -
>
> Key: FLINK-27334
> URL: https://issues.apache.org/jira/browse/FLINK-27334
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-26926) Allow users to force upgrade even if savepoint is in progress

2022-04-20 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17525083#comment-17525083
 ] 

Nicholas Jiang edited comment on FLINK-26926 at 4/20/22 3:51 PM:
-

[~gyfora], I have pushed a PR for this ticket. Could you please assign this 
ticket to me?


was (Author: nicholasjiang):
[~gyfora], I have push a PR for this ticket. Please help to assign this ticket 
to me.

> Allow users to force upgrade even if savepoint is in progress
> -
>
> Key: FLINK-26926
> URL: https://issues.apache.org/jira/browse/FLINK-26926
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Currently all upgrades (regardless of upgrade mode) are delayed as long as 
> there is a pending savepoint operation.
> We should allow users to override this and execute the upgrade (thus 
> potentially cancelling the savepoint) regardless of the savepoint status.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-26926) Allow users to force upgrade even if savepoint is in progress

2022-04-20 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17525083#comment-17525083
 ] 

Nicholas Jiang commented on FLINK-26926:


[~gyfora], I have pushed a PR for this ticket. Please help to assign this ticket 
to me.

> Allow users to force upgrade even if savepoint is in progress
> -
>
> Key: FLINK-26926
> URL: https://issues.apache.org/jira/browse/FLINK-26926
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
> Fix For: kubernetes-operator-1.0.0
>
>
> Currently all upgrades (regardless of upgrade mode) are delayed as long as 
> there is a pending savepoint operation.
> We should allow users to override this and execute the upgrade (thus 
> potentially cancelling the savepoint) regardless of the savepoint status.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-26926) Allow users to force upgrade even if savepoint is in progress

2022-04-19 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524352#comment-17524352
 ] 

Nicholas Jiang commented on FLINK-26926:


[~gyfora], IMO, the `kubernetes.operator.job.upgrade.ignore-pending-savepoint` 
option should be added after FLINK-27023. Right?

> Allow users to force upgrade even if savepoint is in progress
> -
>
> Key: FLINK-26926
> URL: https://issues.apache.org/jira/browse/FLINK-26926
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
> Fix For: kubernetes-operator-1.0.0
>
>
> Currently all upgrades (regardless of upgrade mode) are delayed as long as 
> there is a pending savepoint operation.
> We should allow users to override this and execute the upgrade (thus 
> potentially cancelling the savepoint) regardless of the savepoint status.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27023) Merge default flink and operator configuration settings for the operator

2022-04-19 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524348#comment-17524348
 ] 

Nicholas Jiang commented on FLINK-27023:


[~gyfora], I was also working on this issue, since my other tickets are 
basically completed, but I will take another ticket to work on.

> Merge default flink and operator configuration settings for the operator
> 
>
> Key: FLINK-27023
> URL: https://issues.apache.org/jira/browse/FLINK-27023
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
> Fix For: kubernetes-operator-1.0.0
>
>
> Based on the mailing list discussion : 
> [https://lists.apache.org/thread/pnf2gk9dgqv3qrtszqbfcdxf32t2gr3x]
> As a first step we can combine the operators default flink and operator 
> config.
> This includes the following changes:
>  # Get rid of the DefaultConfig class and replace with a single Configuration 
> object containing the settings for both.
>  # Rename OperatorConfigOptions -> KubernetesOperatorConfigOptions
>  # Prefix all options with `kubernetes` to get kubernetes.operator.
>  # In the helm chart combine the operatorConfiguration and 
> flinkDefaultConfiguration into a common defaultConfigurationSection. We 
> should still keep the logging settings separately for the two somehow



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27183) Optimize CepOperator by using temporal state

2022-04-12 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17521045#comment-17521045
 ] 

Nicholas Jiang commented on FLINK-27183:


[~danderson], I mainly maintain Flink CEP. How much improvement can the use of 
temporal state bring to the performance of CepOperator on RocksDB? If it 
improves a lot, I would like to take this ticket and implement it according to 
the public interfaces mentioned in FLIP-220.

> Optimize CepOperator by using temporal state
> 
>
> Key: FLINK-27183
> URL: https://issues.apache.org/jira/browse/FLINK-27183
> Project: Flink
>  Issue Type: Sub-task
>  Components: Library / CEP
>Reporter: David Anderson
>Priority: Major
>
> The performance of CEP on RocksDB can be significantly improved by having it 
> use temporal state.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27153) Allow optional last-state fallback for savepoint upgrade mode

2022-04-09 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519993#comment-17519993
 ] 

Nicholas Jiang commented on FLINK-27153:


[~gyfora], is the `kubernetes.operator.job.upgrade.last-state-fallback` 
configuration flag used to allow checkpoint based (last-state) recovery?

> Allow optional last-state fallback for savepoint upgrade mode
> -
>
> Key: FLINK-27153
> URL: https://issues.apache.org/jira/browse/FLINK-27153
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
> Fix For: kubernetes-operator-1.0.0
>
>
> In many cases users would prefer to take a savepoint if the job is healthy 
> before performing an upgrade but still allow checkpoint based (last-state) 
> recovery in case the savepoint fails or the job is generally in a bad state.
> We should add a configuration flag for this that the user can set in the 
> flinkConfiguration:
> `kubernetes.operator.job.upgrade.last-state-fallback`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


  1   2   3   4   5   6   7   >