[jira] [Assigned] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)

2023-04-02 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-31674:
--

Assignee: Ryan Skraba

> [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)
> --
>
> Key: FLINK-31674
> URL: https://issues.apache.org/jira/browse/FLINK-31674
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Ryan Skraba
>Assignee: Ryan Skraba
>Priority: Major
>
> This is a sub-task of the flink-table-planner JUnit 5 migration 
> (FLINK-29541).
> While most of the JUnit migration tasks are done module by module, a number of 
> abstract test classes in flink-table-planner have large hierarchies that 
> cross module boundaries.  This task is to migrate all of the tests that 
> depend on {{BatchAbstractTestBase}} to JUnit 5.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31690) The current key is not set for KeyedCoProcessOperator

2023-04-02 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-31690.
---
Fix Version/s: 1.16.2
   1.18.0
   1.17.1
   Resolution: Fixed

Fixed in:
- master via 6c1ffe544e31bb67df94175a559f2f40362795a4
- release-1.17 via e3d612e7e98bedde42c365df3f2ed2a2ca76aefa
- release-1.16 via 01cdaee25cdc41773a5c42638f4c5209373b5aa4

> The current key is not set for KeyedCoProcessOperator
> -
>
> Key: FLINK-31690
> URL: https://issues.apache.org/jira/browse/FLINK-31690
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
> See https://apache-flink.slack.com/archives/C03G7LJTS2G/p1680294701254239 for 
> more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dianfu closed pull request #22323: [FLINK-31690][python] Fix KeyedCoProcessFunction to set the current k…

2023-04-02 Thread via GitHub


dianfu closed pull request #22323: [FLINK-31690][python] Fix 
KeyedCoProcessFunction to set the current k…
URL: https://github.com/apache/flink/pull/22323


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31688) Broken links in docs for Azure Table Storage

2023-04-02 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-31688.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

master (1.18) via 6101ad313be78a658cc25487b9d3976252625feb.

> Broken links in docs for Azure Table Storage
> 
>
> Key: FLINK-31688
> URL: https://issues.apache.org/jira/browse/FLINK-31688
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> The doc page 
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/dataset/formats/azure_table_storage/
>  has broken links; we should fix them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] reswqa merged pull request #22321: [FLINK-31688][docs] Fix the broken links in docs for Azure Table Storage

2023-04-02 Thread via GitHub


reswqa merged PR #22321:
URL: https://github.com/apache/flink/pull/22321


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22324: [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function.

2023-04-02 Thread via GitHub


flinkbot commented on PR #22324:
URL: https://github.com/apache/flink/pull/22324#issuecomment-1493647373

   
   ## CI report:
   
   * 8a534fe42ef69bc595fe9b87f1de924fdc173815 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31691:
---
Labels: pull-request-available  (was: )

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of rows 
> with two fields. Note that each row should have exactly two fields and the key 
> field should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> Returns a map created from an array of rows with two fields. Note that each 
> row should have exactly two fields and the key field should not be null.
> Returns null if the argument is null.
> {code:sql}
> > SELECT map_from_entries(array[row(1, 'a'), row(2, 'b')]);
>  {1=a, 2=b}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries
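The semantics described above can be modeled in a few lines of Python. This is a hypothetical reference model of the described behavior (the function name mirrors the SQL function; the null-handling rules are taken from the description), not Flink's actual implementation:

```python
def map_from_entries(entries):
    """Reference model of MAP_FROM_ENTRIES: build a map from an array of
    two-field rows (key, value). Returns None (SQL NULL) if the argument
    is null; a null key is an error, mirroring the rule above."""
    if entries is None:
        return None
    result = {}
    for row in entries:
        key, value = row  # each row must have exactly two fields
        if key is None:
            raise ValueError("map key must not be null")
        result[key] = value
    return result

# An array of rows [(1, 'a'), (2, 'b')] becomes the map {1: 'a', 2: 'b'}.
print(map_from_entries([(1, 'a'), (2, 'b')]))  # → {1: 'a', 2: 'b'}
```

Note that a later entry with a duplicate key simply overwrites the earlier one in this sketch; the description does not specify duplicate-key behavior.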



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] liuyongvs opened a new pull request, #22324: [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function.

2023-04-02 Thread via GitHub


liuyongvs opened a new pull request, #22324:
URL: https://github.com/apache/flink/pull/22324

   - What is the purpose of the change
   This is an implementation of MAP_FROM_ENTRIES
   
   - Brief change log
   MAP_FROM_ENTRIES for Table API and SQL
   
   ```
   map_from_entries(array_of_rows) - Returns a map created from an array of rows 
with two fields. Note that each row should have exactly two fields and the key 
field should not be null.
   
   Syntax:
   map_from_entries(array_of_rows)
   
   Arguments:
   array_of_rows: an array of rows with two fields.
   
   Returns:
   
   Returns a map created from an array of rows with two fields. Note that each 
row should have exactly two fields and the key field should not be null.
   
   Returns null if the argument is null.
   
   > SELECT map_from_entries(array[row(1, 'a'), row(2, 'b')]);
{1=a, 2=b}
   ```
   
   
   See also
   presto https://prestodb.io/docs/current/functions/map.html
   
   spark 
https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries
   
   - Verifying this change
   This change added tests in MapFunctionITCase.
   
   - Does this pull request potentially affect one of the following parts:
   Dependencies (does it add or upgrade a dependency): ( no)
   The public API, i.e., is any changed class annotated with @Public(Evolving): 
(yes )
   The serializers: (no)
   The runtime per-record code paths (performance sensitive): ( no)
   Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no)
   The S3 file system connector: ( no)
   - Documentation
   Does this pull request introduce a new feature? (yes)
   If yes, how is the feature documented? (docs)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31691:
-
Description: 
map_from_entries(array_of_rows) - Returns a map created from an array of rows 
with two fields. Note that each row should have exactly two fields and the key 
field should not be null.

Syntax:
map_from_entries(array_of_rows)

Arguments:
array_of_rows: an array of rows with two fields.

Returns:

Returns a map created from an array of rows with two fields. Note that each 
row should have exactly two fields and the key field should not be null.

Returns null if the argument is null.
{code:sql}
> SELECT map_from_entries(array[row(1, 'a'), row(2, 'b')]);
 {1=a, 2=b}{code}
See also
presto [https://prestodb.io/docs/current/functions/map.html]

spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of rows 
> with two fields. Note that each row should have exactly two fields and the key 
> field should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> Returns a map created from an array of rows with two fields. Note that each 
> row should have exactly two fields and the key field should not be null.
> Returns null if the argument is null.
> {code:sql}
> > SELECT map_from_entries(array[row(1, 'a'), row(2, 'b')]);
> >  {1=a, 2=b}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Samrat002 commented on pull request #22308: [FLINK-31518][Runtime / REST] Fix StandaloneHaServices#getClusterRestEndpointLeaderRetreiver to return correct rest port

2023-04-02 Thread via GitHub


Samrat002 commented on PR #22308:
URL: https://github.com/apache/flink/pull/22308#issuecomment-1493618741

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)
jackylau created FLINK-31691:


 Summary: Add MAP_FROM_ENTRIES supported in SQL & Table API
 Key: FLINK-31691
 URL: https://issues.apache.org/jira/browse/FLINK-31691
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707755#comment-17707755
 ] 

jackylau commented on FLINK-31602:
--

Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of the element in the array, as a long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: the ARRAY to search.
> element: the value to search for.
> Returns:
> Returns the position of the first occurrence of the element in the given 
> array, as a long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null.
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]
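The contract above (1-based position, 0 when absent, SQL NULL propagation) can be sketched as a small Python reference model. This is a hypothetical illustration of the described semantics, not the Flink implementation:

```python
def array_position(arr, element):
    """Reference model of ARRAY_POSITION: 1-based position of the first
    occurrence of element; 0 if the element is absent; None (SQL NULL)
    if either argument is null."""
    if arr is None or element is None:
        return None
    for i, value in enumerate(arr, start=1):
        if value == element:
            return i  # first match wins
    return 0

# 1 is the third element of [3, 2, 1], so the position is 3.
print(array_position([3, 2, 1], 1))  # → 3
```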



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707755#comment-17707755
 ] 

jackylau edited comment on FLINK-31602 at 4/3/23 4:06 AM:
--

Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]


was (Author: jackylau):
Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13524990]

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of the element in the array, as a long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: the ARRAY to search.
> element: the value to search for.
> Returns:
> Returns the position of the first occurrence of the element in the given 
> array, as a long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null.
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau closed FLINK-31602.

Resolution: Fixed

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of the element in the array, as a long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: the ARRAY to search.
> element: the value to search for.
> Returns:
> Returns the position of the first occurrence of the element in the given 
> array, as a long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null.
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #22323: [FLINK-31690][python] Fix KeyedCoProcessFunction to set the current k…

2023-04-02 Thread via GitHub


flinkbot commented on PR #22323:
URL: https://github.com/apache/flink/pull/22323#issuecomment-1493609975

   
   ## CI report:
   
   * e1e4b7235fbdcc5390e5db33ad6b6c69cf313b8d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31690) The current key is not set for KeyedCoProcessOperator

2023-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31690:
---
Labels: pull-request-available  (was: )

> The current key is not set for KeyedCoProcessOperator
> -
>
> Key: FLINK-31690
> URL: https://issues.apache.org/jira/browse/FLINK-31690
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
>
> See https://apache-flink.slack.com/archives/C03G7LJTS2G/p1680294701254239 for 
> more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dianfu opened a new pull request, #22323: [FLINK-31690][python] Fix KeyedCoProcessFunction to set the current k…

2023-04-02 Thread via GitHub


dianfu opened a new pull request, #22323:
URL: https://github.com/apache/flink/pull/22323

   …ey into the context
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22322: [BP-1.16][FLINK-31652][k8s] Handle the deleted event in case pod is deleted during the pending phase

2023-04-02 Thread via GitHub


flinkbot commented on PR #22322:
URL: https://github.com/apache/flink/pull/22322#issuecomment-1493594228

   
   ## CI report:
   
   * b84af1fbef1bb8d057a36a5ef90a585047dd89ce UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] Fanoid commented on a diff in pull request #227: [FLINK-31623] Fix DataStreamUtils#sample to uniform sampling.

2023-04-02 Thread via GitHub


Fanoid commented on code in PR #227:
URL: https://github.com/apache/flink-ml/pull/227#discussion_r1155451901


##
flink-ml-core/src/main/java/org/apache/flink/ml/common/datastream/DataStreamUtils.java:
##
@@ -280,6 +280,15 @@ public static  DataStream aggregate(
  * This method takes samples without replacement. If the number of 
elements in the stream is
  * smaller than expected number of samples, all elements will be included 
in the sample.
  *
+ * Technical details about this method: Firstly, the input elements are 
rebalanced. Then, in

Review Comment:
   Thanks for your comments. I've updated the PR.
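The sampling contract quoted in the diff (uniform, without replacement; if the stream has fewer elements than the requested sample size, all elements are kept) matches classic reservoir sampling. A hedged single-process Python sketch of that technique follows; the actual distributed `DataStreamUtils#sample` implementation differs (it rebalances and samples per subtask), so treat this only as an illustration of the contract:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Uniform sample of up to k elements without replacement
    (Algorithm R reservoir sampling). If the stream has fewer than k
    elements, all of them are returned, matching the quoted contract."""
    rng = rng or random.Random()
    reservoir = []
    for n, item in enumerate(stream):
        if n < k:
            reservoir.append(item)  # fill the reservoir first
        else:
            # Keep item with probability k/(n+1) by overwriting a
            # uniformly chosen slot.
            j = rng.randrange(n + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

# A stream shorter than k is returned in full.
print(sorted(reservoir_sample(range(3), 10)))  # → [0, 1, 2]
```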



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] chucheng92 commented on pull request #22179: [FLINK-31380][table] FLIP-297: Support enhanced show catalogs syntax

2023-04-02 Thread via GitHub


chucheng92 commented on PR #22179:
URL: https://github.com/apache/flink/pull/22179#issuecomment-1493592946

   @Aitozi Sorry to ping you. All your comments have been addressed; could you 
help re-check it? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] xintongsong closed pull request #22311: [FLINK-31652][k8s] Handle the deleted event in case pod is deleted during the pending phase

2023-04-02 Thread via GitHub


xintongsong closed pull request #22311: [FLINK-31652][k8s] Handle the deleted 
event in case pod is deleted during the pending phase
URL: https://github.com/apache/flink/pull/22311


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] xintongsong commented on a diff in pull request #22311: [FLINK-31652][k8s] Handle the deleted event in case pod is deleted during the pending phase

2023-04-02 Thread via GitHub


xintongsong commented on code in PR #22311:
URL: https://github.com/apache/flink/pull/22311#discussion_r1155446163


##
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/KubernetesResourceManagerDriverTest.java:
##
@@ -184,6 +184,18 @@ void testOnPodDeleted() throws Exception {
 };
 }
 
+@Test
+void testOnPodDeletedWithDeletedEvent() throws Exception {

Review Comment:
   ```suggestion
   void testOnPodDeletedBeforeScheduled() throws Exception {
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-02 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707742#comment-17707742
 ] 

luoyuxia commented on FLINK-31689:
--

Please remember that the state is not compatible once you have changed the 
parallelism; that is why it throws the exception.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encounter this error when i tried to use Filesystem sink with Table SQL. I 
> have not tested with Datastream API tho. You may refers to the error as below
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the fully reproducible code here, but you may follow my pseudo 
> code in the attachment and the reproduction steps below:
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}
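The stack trace shows `CompactOperator.initializeState` calling `next()` on restored state. A hypothetical Python sketch of why rescaling can trigger this kind of failure: list state written at parallelism 1 is redistributed across two subtasks on restore, and a subtask whose share is empty fails if the operator unconditionally reads the first entry (Python raises `StopIteration` where Java's `Iterator.next()` throws `NoSuchElementException`). The function names here are illustrative, not Flink's:

```python
def redistribute(state_entries, new_parallelism):
    """Round-robin split of restored list-state entries across the new
    subtasks (a simplified model of list-state redistribution)."""
    shares = [[] for _ in range(new_parallelism)]
    for i, entry in enumerate(state_entries):
        shares[i % new_parallelism].append(entry)
    return shares

def initialize_state(restored_entries):
    """Simplified model of an operator that assumes its restored state
    share is non-empty and unconditionally reads the first entry."""
    it = iter(restored_entries)
    return next(it)  # raises StopIteration if the share is empty

shares = redistribute(["task-0-state"], new_parallelism=2)
print(initialize_state(shares[0]))  # subtask 0 restores fine
try:
    initialize_state(shares[1])     # subtask 1 got an empty share
except StopIteration:
    print("empty state share: analogous to NoSuchElementException")
```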



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] FangYongs commented on a diff in pull request #22289: [FLINK-31545][jdbc-driver] Create executor in flink connection

2023-04-02 Thread via GitHub


FangYongs commented on code in PR #22289:
URL: https://github.com/apache/flink/pull/22289#discussion_r1155438551


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkConnection.java:
##
@@ -18,34 +18,67 @@
 
 package org.apache.flink.table.jdbc;
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.StatementResult;
+import org.apache.flink.table.gateway.service.context.DefaultContext;
+import org.apache.flink.table.jdbc.utils.DriverUtils;
+
 import java.sql.DatabaseMetaData;
 import java.sql.SQLClientInfoException;
 import java.sql.SQLException;
 import java.sql.SQLFeatureNotSupportedException;
 import java.sql.Statement;
+import java.util.Collections;
 import java.util.Properties;
+import java.util.UUID;
 
 /** Connection to flink sql gateway for jdbc driver. */
 public class FlinkConnection extends BaseConnection {
-private final DriverUri driverUri;
+private final Executor executor;
+private volatile boolean closed = false;
 
 public FlinkConnection(DriverUri driverUri) {
-this.driverUri = driverUri;
+// TODO Support default context from map to get rid of flink-core for 
jdbc driver

Review Comment:
   Thanks @libenchao , added the jira link



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31690) The current key is not set for KeyedCoProcessOperator

2023-04-02 Thread Dian Fu (Jira)
Dian Fu created FLINK-31690:
---

 Summary: The current key is not set for KeyedCoProcessOperator
 Key: FLINK-31690
 URL: https://issues.apache.org/jira/browse/FLINK-31690
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Dian Fu
Assignee: Dian Fu


See https://apache-flink.slack.com/archives/C03G7LJTS2G/p1680294701254239 for 
more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] BoYiZhang commented on pull request #22204: [Flink 31170] [docs]The spelling error of the document word causes sq…

2023-04-02 Thread via GitHub


BoYiZhang commented on PR #22204:
URL: https://github.com/apache/flink/pull/22204#issuecomment-1493542108

   This is my first PR, please help me find out where it doesn't meet the 
contribution guidelines.
   @reswqa @healchow


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] libenchao commented on a diff in pull request #22289: [FLINK-31545][jdbc-driver] Create executor in flink connection

2023-04-02 Thread via GitHub


libenchao commented on code in PR #22289:
URL: https://github.com/apache/flink/pull/22289#discussion_r1155415681


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkConnection.java:
##
@@ -18,34 +18,67 @@
 
 package org.apache.flink.table.jdbc;
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.StatementResult;
+import org.apache.flink.table.gateway.service.context.DefaultContext;
+import org.apache.flink.table.jdbc.utils.DriverUtils;
+
 import java.sql.DatabaseMetaData;
 import java.sql.SQLClientInfoException;
 import java.sql.SQLException;
 import java.sql.SQLFeatureNotSupportedException;
 import java.sql.Statement;
+import java.util.Collections;
 import java.util.Properties;
+import java.util.UUID;
 
 /** Connection to flink sql gateway for jdbc driver. */
 public class FlinkConnection extends BaseConnection {
-private final DriverUri driverUri;
+private final Executor executor;
+private volatile boolean closed = false;
 
 public FlinkConnection(DriverUri driverUri) {
-this.driverUri = driverUri;
+// TODO Support default context from map to get rid of flink core for 
jdbc driver

Review Comment:
   It would be great if you also add the Jira link in the comment.








[GitHub] [flink-connector-kafka] paul8263 commented on pull request #5: [FLINK-25916][connector-kafka] Using upsert-kafka with a flush buffer…

2023-04-02 Thread via GitHub


paul8263 commented on PR #5:
URL: 
https://github.com/apache/flink-connector-kafka/pull/5#issuecomment-1493499385

   Hi @tzulitai ,
   
   Could you please help approve the workflow? It seems that the CI process 
won't run anymore.
   
   I tried pushing the same commit to both this project and the legacy Flink 
project, but I got different errors. It may take some time to figure out.





[jira] [Updated] (FLINK-31108) Use StreamARN for API calls in Kinesis Connector

2023-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31108:
---
Labels: pull-request-available  (was: )

> Use StreamARN for API calls in Kinesis Connector
> 
>
> Key: FLINK-31108
> URL: https://issues.apache.org/jira/browse/FLINK-31108
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.3, 1.16.1
>Reporter: Hong Liang Teoh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.1
>
>
> Currently, the FlinkKinesisConsumer (Polling + EFO) and FlinkKinesisProducer 
> use the stream name in API calls.
> We want to change this to the StreamARN. There are two reasons for this:
>  - It allows lower-latency calls to the Kinesis endpoint for the GetRecords API
>  - It paves the way for letting users target cross-account streams without an 
> assumed role (i.e. an IAM role in account A but a target stream in account B)
>  
> The APIs that are currently called:
>  * ListShards
>  * GetShardIterator
>  * GetRecords
>  * DescribeStream
>  * DescribeStreamSummary
>  * DescribeStreamConsumer (already uses StreamARN)
>  * RegisterStreamConsumer (already uses StreamARN)
>  
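As a side note, a Kinesis data stream ARN is fully determined by the region, account ID, and stream name. A minimal Python sketch of building one (the function name and example values are illustrative, not part of the connector):

```python
def kinesis_stream_arn(region: str, account_id: str, stream_name: str) -> str:
    """Build a Kinesis data stream ARN.

    The documented format is
    arn:<partition>:kinesis:<region>:<account-id>:stream/<stream-name>;
    the standard "aws" partition is assumed here.
    """
    return f"arn:aws:kinesis:{region}:{account_id}:stream/{stream_name}"

# Cross-account case: credentials live in account A, the stream in account B.
print(kinesis_stream_arn("us-east-1", "123456789012", "example-stream"))
# prints arn:aws:kinesis:us-east-1:123456789012:stream/example-stream
```

Addressing the stream by such an ARN, rather than a bare name, is what makes the cross-account case expressible without an assumed role.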





[GitHub] [flink-connector-aws] hlteoh37 opened a new pull request, #64: [FLINK-31108][Connectors/Kinesis] Use Stream ARN instead of Stream Na…

2023-04-02 Thread via GitHub


hlteoh37 opened a new pull request, #64:
URL: https://github.com/apache/flink-connector-aws/pull/64

   …me for Kinesis API calls
   
   ## Purpose of the change
   Modifies Kinesis Source to use Stream ARN instead of Stream name.
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   - Added unit tests
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? (not applicable / docs / JavaDocs / not 
documented)
   





[GitHub] [flink-kubernetes-operator] mateczagany commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-04-02 Thread via GitHub


mateczagany commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1155350156


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -627,14 +637,42 @@ public Map getClusterInfo(Configuration 
conf) throws Exception {
 .toSeconds(),
 TimeUnit.SECONDS);
 
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_VERSION,
 dashboardConfiguration.getFlinkVersion());
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_REVISION,
 dashboardConfiguration.getFlinkRevision());
 }
-return runtimeVersion;
+
+// JobManager resource usage can be deduced from the CR
+var jmParameters =
+new KubernetesJobManagerParameters(
+conf, new 
KubernetesClusterClientFactory().getClusterSpecification(conf));
+var jmTotalCpu =
+jmParameters.getJobManagerCPU()
+* jmParameters.getJobManagerCPULimitFactor()
+* jmParameters.getReplicas();
+var jmTotalMemory =
+Math.round(
+jmParameters.getJobManagerMemoryMB()
+* Math.pow(1024, 2)
+* jmParameters.getJobManagerMemoryLimitFactor()
+* jmParameters.getReplicas());
+
+// TaskManager resource usage is best gathered from the REST API to 
get current replicas

Review Comment:
   I've pushed your requested changes and also extracted the logic into a new 
method so we can test it more easily without needing the REST API. I just wasn't 
sure where to place the test; I'm not that familiar with the project structure yet :D 
   
   If you think the PR looks OK, please let me know where you think I should 
write a test for `AbstractFlinkService#calculateClusterResourceMetrics`, and I will 
do that as well.






[jira] [Created] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-02 Thread jirawech.s (Jira)
jirawech.s created FLINK-31689:
--

 Summary: Filesystem sink fails when parallelism of compactor 
operator changed
 Key: FLINK-31689
 URL: https://issues.apache.org/jira/browse/FLINK-31689
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.16.1
Reporter: jirawech.s
 Attachments: HelloFlinkHadoopSink.java

I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
have not tested with the DataStream API, though. You may refer to the error below:
{code:java}
// code placeholder
java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:864)
at 
org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
at 
org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:750) {code}
I cannot attach the full reproducible code here, but you can follow the pseudo 
code in the attachment and the reproduction steps below:
1. Create Kafka source

2. Set state.savepoints.dir

3. Set Job parallelism to 1

4. Create FileSystem Sink

5. Run the job and trigger savepoint with API
{noformat}
curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
false}'{noformat}
6. Cancel the job, change the parallelism to 2, and resume the job from the 
savepoint
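The stack trace fails inside `CompactOperator.initializeState` at `ArrayList$Itr.next`, the classic symptom of calling `next()` on empty restored state in a subtask that did not exist at the old parallelism. A hedged Python stand-in for the failure mode and a guarded alternative (`restore_unguarded` and `restore_guarded` are illustrative names, not Flink APIs):

```python
def restore_unguarded(restored_state):
    # Mirrors the failing pattern: next() on an exhausted iterator raises
    # (NoSuchElementException in Java, StopIteration in Python).
    return next(iter(restored_state))

def restore_guarded(restored_state, default=None):
    # A subtask added by scaling up has no restored state entries;
    # fall back to a default instead of raising.
    return next(iter(restored_state), default)

# Subtask 2 of 2 after rescaling from parallelism 1 sees empty state.
print(restore_guarded([], default="fresh-start"))  # prints fresh-start
```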





[jira] [Commented] (FLINK-28440) EventTimeWindowCheckpointingITCase failed with restore

2023-04-02 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707680#comment-17707680
 ] 

Qingsheng Ren commented on FLINK-28440:
---

Redirected to here from FLINK-30107:

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47803=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=9691

> EventTimeWindowCheckpointingITCase failed with restore
> --
>
> Key: FLINK-28440
> URL: https://issues.apache.org/jira/browse/FLINK-28440
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Yanfei Lei
>Priority: Critical
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.16.2, 1.18.0
>
> Attachments: image-2023-02-01-00-51-54-506.png, 
> image-2023-02-01-01-10-01-521.png, image-2023-02-01-01-19-12-182.png, 
> image-2023-02-01-16-47-23-756.png, image-2023-02-01-16-57-43-889.png, 
> image-2023-02-02-10-52-56-599.png, image-2023-02-03-10-09-07-586.png, 
> image-2023-02-03-12-03-16-155.png, image-2023-02-03-12-03-56-614.png
>
>
> {code:java}
> Caused by: java.lang.Exception: Exception while creating 
> StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:256)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:268)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:722)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:698)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:665)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for WindowOperator_0a448493b4782967b150582570326227_(2/4) from 
> any of the 1 provided restore options.
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:353)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:165)
>   ... 11 more
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: 
> /tmp/junit1835099326935900400/junit1113650082510421526/52ee65b7-033f-4429-8ddd-adbe85e27ced
>  (No such file or directory)
>   at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.advance(StateChangelogHandleStreamHandleReader.java:87)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.hasNext(StateChangelogHandleStreamHandleReader.java:69)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.readBackendHandle(ChangelogBackendRestoreOperation.java:96)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.restore(ChangelogBackendRestoreOperation.java:75)
>   at 
> org.apache.flink.state.changelog.ChangelogStateBackend.restore(ChangelogStateBackend.java:92)
>   at 
> org.apache.flink.state.changelog.AbstractChangelogStateBackend.createKeyedStateBackend(AbstractChangelogStateBackend.java:136)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:336)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
>   ... 13 more
> Caused by: 

[jira] [Commented] (FLINK-26974) Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure

2023-04-02 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707679#comment-17707679
 ] 

Qingsheng Ren commented on FLINK-26974:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47803=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=29211

> Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure
> -
>
> Key: FLINK-26974
> URL: https://issues.apache.org/jira/browse/FLINK-26974
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> Mar 31 10:49:17 === FAILURES 
> ===
> Mar 31 10:49:17 __ 
> EmbeddedThreadDependencyTests.test_add_python_file __
> Mar 31 10:49:17 
> Mar 31 10:49:17 self = 
>  testMethod=test_add_python_file>
> Mar 31 10:49:17 
> Mar 31 10:49:17 def test_add_python_file(self):
> Mar 31 10:49:17 python_file_dir = os.path.join(self.tempdir, 
> "python_file_dir_" + str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir)
> Mar 31 10:49:17 python_file_path = os.path.join(python_file_dir, 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nraise 
> Exception('This function should not be called!')")
> Mar 31 10:49:17 self.t_env.add_python_file(python_file_path)
> Mar 31 10:49:17 
> Mar 31 10:49:17 python_file_dir_with_higher_priority = os.path.join(
> Mar 31 10:49:17 self.tempdir, "python_file_dir_" + 
> str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir_with_higher_priority)
> Mar 31 10:49:17 python_file_path_higher_priority = 
> os.path.join(python_file_dir_with_higher_priority,
> Mar 31 10:49:17 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path_higher_priority, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nreturn a + 2")
> Mar 31 10:49:17 
> self.t_env.add_python_file(python_file_path_higher_priority)
> Mar 31 10:49:17 
> Mar 31 10:49:17 def plus_two(i):
> Mar 31 10:49:17 from test_dependency_manage_lib import add_two
> Mar 31 10:49:17 return add_two(i)
> Mar 31 10:49:17 
> Mar 31 10:49:17 self.t_env.create_temporary_system_function(
> Mar 31 10:49:17 "add_two", udf(plus_two, DataTypes.BIGINT(), 
> DataTypes.BIGINT()))
> Mar 31 10:49:17 table_sink = source_sink_utils.TestAppendSink(
> Mar 31 10:49:17 ['a', 'b'], [DataTypes.BIGINT(), 
> DataTypes.BIGINT()])
> Mar 31 10:49:17 self.t_env.register_table_sink("Results", table_sink)
> Mar 31 10:49:17 t = self.t_env.from_elements([(1, 2), (2, 5), (3, 
> 1)], ['a', 'b'])
> Mar 31 10:49:17 >   t.select(expr.call("add_two", t.a), 
> t.a).execute_insert("Results").wait()
> Mar 31 10:49:17 
> Mar 31 10:49:17 pyflink/table/tests/test_dependency.py:63: 
> Mar 31 10:49:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _ 
> Mar 31 10:49:17 pyflink/table/table_result.py:76: in wait
> Mar 31 10:49:17 get_method(self._j_table_result, "await")()
> Mar 31 10:49:17 
> .tox/py38-cython/lib/python3.8/site-packages/py4j/java_gateway.py:1321: in 
> __call__
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=34001=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=27239





[jira] [Commented] (FLINK-30299) TaskManagerRunnerTest fails with 239 exit code (i.e. FatalExitExceptionHandler was called)

2023-04-02 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707678#comment-17707678
 ] 

Qingsheng Ren commented on FLINK-30299:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47796=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=8321

> TaskManagerRunnerTest fails with 239 exit code (i.e. 
> FatalExitExceptionHandler was called)
> --
>
> Key: FLINK-30299
> URL: https://issues.apache.org/jira/browse/FLINK-30299
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> We're again experiencing the 239 exit code being caused by 
> {{FatalExitExceptionHandler}} due to class loading issues:
> {code}
> 04:53:03,365 [flink-akka.remote.default-remote-dispatcher-8] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'flink-akka.remote.default-remote-dispatcher-8' produced an uncaught 
> exception. Stopping the process...
> java.lang.NoClassDefFoundError: 
> akka/remote/transport/netty/NettyFutureBridge$$anon$1
> at 
> akka.remote.transport.netty.NettyFutureBridge$.apply(NettyTransport.scala:65) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> akka.remote.transport.netty.NettyTransport.$anonfun$associate$1(NettyTransport.scala:566)
>  ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:303) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
>  ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
>  ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49) 
> ~[flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
>  [flink-rpc-akka_b340b753-81f5-4e09-b083-5f8c92589fad.jar:1.16-SNAPSHOT]
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> [?:1.8.0_292]
> at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> [?:1.8.0_292]
> at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> [?:1.8.0_292]
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
> [?:1.8.0_292]
> Caused by: java.lang.ClassNotFoundException: 
> akka.remote.transport.netty.NettyFutureBridge$$anon$1
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382) 
> ~[?:1.8.0_292]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418) 
> ~[?:1.8.0_292]
> at 
> org.apache.flink.core.classloading.ComponentClassLoader.loadClassFromComponentOnly(ComponentClassLoader.java:149)
>  ~[flink-core-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
> at 
> org.apache.flink.core.classloading.ComponentClassLoader.loadClass(ComponentClassLoader.java:112)
>  ~[flink-core-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351) 
> ~[?:1.8.0_292]
> ... 16 more
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43694=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=8319
> I created this as a follow-up of FLINK-26037 because we repurposed it and 
> fixed a bug in FLINK-26037. But it looks like both are being caused by the 
> same issue.




[GitHub] [flink] flinkbot commented on pull request #22321: [FLINK-31688][docs] Fix the broken links in docs for Azure Table Storage

2023-04-02 Thread via GitHub


flinkbot commented on PR #22321:
URL: https://github.com/apache/flink/pull/22321#issuecomment-1493343603

   
   ## CI report:
   
   * a7cd7d1f5b2fbf437caec2f52a00dc72e72b05b1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-31688) Broken links in docs for Azure Table Storage

2023-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31688:
---
Labels: pull-request-available  (was: )

> Broken links in docs for Azure Table Storage
> 
>
> Key: FLINK-31688
> URL: https://issues.apache.org/jira/browse/FLINK-31688
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>
> The doc page of 
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/dataset/formats/azure_table_storage/
>  has broken links; we should fix them.





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22321: [FLINK-31688][docs] Fix the broken links in docs for Azure Table Storage

2023-04-02 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22321:
URL: https://github.com/apache/flink/pull/22321

   
   
   ## What is the purpose of the change
   
   *Fix the broken links in docs for Azure Table Storage*
   
   
   ## Brief change log
   
 - *Fix the broken links in docs for Azure Table Storage*
   
   
   ## Verifying this change
   
   This change is a code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] flinkbot commented on pull request #22320: [FLINK-31664][Table SQL/API] Add ARRAY_INTERSECT supported in SQL & Table API

2023-04-02 Thread via GitHub


flinkbot commented on PR #22320:
URL: https://github.com/apache/flink/pull/22320#issuecomment-1493338855

   
   ## CI report:
   
   * f86f9fd01d402441cdacd977f71f52461e10fa45 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-31688) Broken links in docs for Azure Table Storage

2023-04-02 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-31688:
--
Description: The doc page of 
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/dataset/formats/azure_table_storage/
 has broken links; we should fix them.

> Broken links in docs for Azure Table Storage
> 
>
> Key: FLINK-31688
> URL: https://issues.apache.org/jira/browse/FLINK-31688
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>
> The doc page of 
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/dataset/formats/azure_table_storage/
>  has broken links; we should fix them.





[jira] [Assigned] (FLINK-31688) Broken links in docs for Azure Table Storage

2023-04-02 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan reassigned FLINK-31688:
-

Assignee: Yuxin Tan

> Broken links in docs for Azure Table Storage
> 
>
> Key: FLINK-31688
> URL: https://issues.apache.org/jira/browse/FLINK-31688
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>






[jira] [Created] (FLINK-31688) Broken links in docs for Azure Table Storage

2023-04-02 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-31688:
-

 Summary: Broken links in docs for Azure Table Storage
 Key: FLINK-31688
 URL: https://issues.apache.org/jira/browse/FLINK-31688
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.18.0
Reporter: Yuxin Tan








[jira] [Updated] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31664:
---
Labels: pull-request-available  (was: )

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[GitHub] [flink] zzzzzzzs opened a new pull request, #22320: [FLINK-31664][Table SQL/API] Add ARRAY_INTERSECT supported in SQL & Table API

2023-04-02 Thread via GitHub


zzzzzzzs opened a new pull request, #22320:
URL: https://github.com/apache/flink/pull/22320

   
   
   ## What is the purpose of the change
   
   Add `array_intersect(array1, array2)` SQL function.
   
   - `array1`: An ARRAY of any type with comparable elements.
   - `array2`: An ARRAY of elements sharing a least common type with the elements of `array1`.
   
   Returns an ARRAY of the same type as `array1`, containing the elements present in both `array1` and `array2`, with no duplicates.
   
   Examples:
   ```sql
   SELECT array_intersect(array(1, 2, 3), array(1, 3, 3, 5));
   [1,3]
   ```
   See also
   https://spark.apache.org/docs/latest/api/sql/index.html#array_intersect
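   For illustration, the intended semantics (an order-preserving intersection with duplicates removed) can be sketched in plain Java; this is a hypothetical standalone helper, not the planner's implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ArrayIntersectSketch {

    // Keeps the elements of array1 that also occur in array2,
    // preserving array1's order and dropping duplicates.
    static <T> List<T> arrayIntersect(List<T> array1, List<T> array2) {
        Set<T> inSecond = new HashSet<>(array2);
        Set<T> result = new LinkedHashSet<>();
        for (T element : array1) {
            if (inSecond.contains(element)) {
                result.add(element);
            }
        }
        return new ArrayList<>(result);
    }

    public static void main(String[] args) {
        // Mirrors the SQL example above: prints [1, 3]
        System.out.println(arrayIntersect(Arrays.asList(1, 2, 3), Arrays.asList(1, 3, 3, 5)));
    }
}
```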
   
   ## Brief change log
   
   Add ARRAY_INTERSECT supported in SQL & Table API
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as `CollectionFunctionsITCase#arrayIntersectTestCases`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] FangYongs commented on pull request #22289: [FLINK-31545][jdbc-driver] Create executor in flink connection

2023-04-02 Thread via GitHub


FangYongs commented on PR #22289:
URL: https://github.com/apache/flink/pull/22289#issuecomment-1493319117

   Thanks @libenchao, I have rebased onto master and updated this PR.





[GitHub] [flink] Aitozi commented on a diff in pull request #22301: [FLINK-31426][table] Upgrade the deprecated UniqueConstraint to the n…

2023-04-02 Thread via GitHub


Aitozi commented on code in PR #22301:
URL: https://github.com/apache/flink/pull/22301#discussion_r1155295984


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/api/TableSchemaTest.java:
##
@@ -328,7 +328,7 @@ void testPrimaryKeyPrinting() {
 + " |-- f0: BIGINT NOT NULL\n"
 + " |-- f1: STRING NOT NULL\n"
 + " |-- f2: DOUBLE NOT NULL\n"
-+ " |-- CONSTRAINT pk PRIMARY KEY (f0, f2)\n");
++ " |-- CONSTRAINT `pk` PRIMARY KEY (`f0`, 
`f2`) NOT ENFORCED\n");

Review Comment:
   Why was this changed?






[GitHub] [flink-kubernetes-operator] mbalassi commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-04-02 Thread via GitHub


mbalassi commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1155274830


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -627,14 +637,42 @@ public Map<String, String> getClusterInfo(Configuration conf) throws Exception {
 .toSeconds(),
 TimeUnit.SECONDS);
 
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_VERSION,
 dashboardConfiguration.getFlinkVersion());
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_REVISION,
 dashboardConfiguration.getFlinkRevision());
 }
-return runtimeVersion;
+
+// JobManager resource usage can be deduced from the CR
+var jmParameters =
+new KubernetesJobManagerParameters(
+conf, new KubernetesClusterClientFactory().getClusterSpecification(conf));
+var jmTotalCpu =
+jmParameters.getJobManagerCPU()
+* jmParameters.getJobManagerCPULimitFactor()
+* jmParameters.getReplicas();
+var jmTotalMemory =
+Math.round(
+jmParameters.getJobManagerMemoryMB()
+* Math.pow(1024, 2)
+* jmParameters.getJobManagerMemoryLimitFactor()
+* jmParameters.getReplicas());
+
+// TaskManager resource usage is best gathered from the REST API to get current replicas

Review Comment:
   Thanks @mateczagany, this approach looks good. If you have the bandwidth, 
would you mind pushing your suggestions to this PR branch so that the commit 
can be attributed to you? I have invited you as a collaborator on my fork; 
you might need to accept the invitation.
   
   I would ask the following if you have the time:
   
   1. Get resource configuration from the config as you suggested uniformly for 
JMs and TMs
   2. Get JM replicas from config, TM replicas from the REST API (we are trying 
to be careful with the TM replicas because we foresee that we might be changing 
things dynamically there via the autoscaler soon)
   3. Add a test to `FlinkDeploymentMetricsTest` verifying that, given a 
properly filled-out `status.clusterInfo`, the metrics are populated correctly.
   
   Currently we do not have a meaningful test for creating the clusterInfo, and 
since we rely on the application's REST API I do not see an easy way of 
testing it properly, so I would accept this change without one (but it might 
merit a separate JIRA).
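   As a standalone illustration of the arithmetic in the diff above (per-pod values scaled by the limit factor and replica count): the class and method names here are hypothetical, not the operator's actual parameter API.

```java
public class ResourceUsageSketch {

    // Total CPU: per-pod CPU * limit factor * replica count.
    static double totalCpu(double cpuPerPod, double cpuLimitFactor, int replicas) {
        return cpuPerPod * cpuLimitFactor * replicas;
    }

    // Total memory in bytes: per-pod MB * 1024^2 * limit factor * replica count.
    static long totalMemoryBytes(int memoryMb, double memoryLimitFactor, int replicas) {
        return Math.round(memoryMb * Math.pow(1024, 2) * memoryLimitFactor * replicas);
    }

    public static void main(String[] args) {
        System.out.println(totalCpu(1.0, 1.0, 2));          // prints 2.0
        System.out.println(totalMemoryBytes(1024, 1.0, 1)); // prints 1073741824
    }
}
```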






[GitHub] [flink] Aitozi commented on a diff in pull request #22179: [FLINK-31380][table] FLIP-297: Support enhanced show catalogs syntax

2023-04-02 Thread via GitHub


Aitozi commented on code in PR #22179:
URL: https://github.com/apache/flink/pull/22179#discussion_r1154263645


##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowCatalogs.java:
##
@@ -28,15 +31,61 @@
 
 import java.util.Collections;
 import java.util.List;
+import java.util.Objects;
+
+import static java.util.Objects.requireNonNull;
 
 /** SHOW CATALOGS sql call. */
 public class SqlShowCatalogs extends SqlCall {
 
+// the LIKE variant in use, e.g. LIKE or ILIKE
+protected final SqlLikeType likeType;
+protected final boolean notLike;
+protected final SqlCharStringLiteral likeLiteral;
+
 public static final SqlSpecialOperator OPERATOR =
 new SqlSpecialOperator("SHOW CATALOGS", SqlKind.OTHER);
 
-public SqlShowCatalogs(SqlParserPos pos) {
+public SqlShowCatalogs(
+SqlParserPos pos, String likeType, SqlCharStringLiteral likeLiteral, boolean notLike) {
 super(pos);
+if (likeType != null) {
+this.likeType = SqlLikeType.of(likeType);
+this.likeLiteral = requireNonNull(likeLiteral, "Like pattern must not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likeLiteral = null;
+this.notLike = false;
+}
+}
+
+public SqlLikeType getLikeType() {
+return likeType;
+}
+
+public boolean isLike() {
+return likeType == SqlLikeType.LIKE;
+}
+
+public boolean isILike() {
+return likeType == SqlLikeType.ILIKE;
+}
+
+public boolean isWithLike() {
+return isLike() || isILike();
+}
+
+public SqlCharStringLiteral getLikeLiteral() {

Review Comment:
   not used



##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowCatalogsOperation.java:
##
@@ -19,21 +19,90 @@
 package org.apache.flink.table.operations;
 
 import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.functions.SqlLikeUtils;
+import org.apache.flink.table.operations.utils.OperationLikeType;
 
+import java.util.Arrays;
+
+import static java.util.Objects.requireNonNull;
import static org.apache.flink.table.api.internal.TableResultUtils.buildStringArrayResult;
 
 /** Operation to describe a SHOW CATALOGS statement. */
 public class ShowCatalogsOperation implements ShowOperation {
 
+// the LIKE variant in use, e.g. LIKE or ILIKE
+private final OperationLikeType likeType;
+private final boolean notLike;
+private final String likePattern;
+
+public ShowCatalogsOperation(String likeType, String likePattern, boolean notLike) {
+if (likeType != null) {
+this.likeType = OperationLikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
+public OperationLikeType getLikeType() {
+return likeType;
+}
+
+public boolean isLike() {
+return likeType == OperationLikeType.LIKE;
+}
+
+public boolean isIlike() {
+return likeType == OperationLikeType.ILIKE;
+}
+
+public boolean isWithLike() {
+return isLike() || isIlike();
+}
+
+public String getLikePattern() {

Review Comment:
   not used
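   For context on the LIKE/ILIKE distinction these accessors expose, here is a minimal sketch of SQL LIKE matching in plain Java, where the case-insensitive flag models ILIKE. This is a hypothetical helper, not Flink's SqlLikeUtils:

```java
import java.util.regex.Pattern;

public class LikeMatchSketch {

    // Converts a SQL LIKE pattern (% = any sequence, _ = any single char)
    // to a regex and matches it; caseInsensitive = true models ILIKE.
    static boolean sqlLike(String input, String pattern, boolean caseInsensitive) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '%') {
                regex.append(".*");
            } else if (c == '_') {
                regex.append('.');
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        int flags = caseInsensitive ? Pattern.CASE_INSENSITIVE : 0;
        return Pattern.compile(regex.toString(), flags).matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(sqlLike("default_catalog", "default%", false)); // prints true
        System.out.println(sqlLike("DEFAULT_CATALOG", "default%", false)); // prints false
        System.out.println(sqlLike("DEFAULT_CATALOG", "default%", true));  // prints true (ILIKE)
    }
}
```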



##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/SqlLikeType.java:
##
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser;
+
+/** Like types utils. */
+public enum SqlLikeType {
+/** sql like pattern, case sensitive. */
+LIKE,
+/** sql like pattern, case insensitive. */
+ILIKE,
+/** use a regex to match (currently not supported by Flink). */

Review Comment:
   Maybe we should add this only once it is supported?



##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowCatalogs.java:
##
@@ -28,15 +31,61 @@
 
 import java.util.Collections;
 import java.util.List;
+import