[GitHub] [flink] flinkbot commented on pull request #22317: [FLINK-29334] Remove releaseAndTryRemoveAll in StateHandleStore

2023-03-31 Thread via GitHub


flinkbot commented on PR #22317:
URL: https://github.com/apache/flink/pull/22317#issuecomment-1492842984

   
   ## CI report:
   
   * 90568d4e00904b802d69b596ac2c2cdde2e92429 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29334) StateHandleStore#releaseAndTryRemoveAll is not used and can be removed

2023-03-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29334:
---
Labels: pull-request-available starer  (was: starer)

> StateHandleStore#releaseAndTryRemoveAll is not used and can be removed
> --
>
> Key: FLINK-29334
> URL: https://issues.apache.org/jira/browse/FLINK-29334
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Wencong Liu
>Priority: Minor
>  Labels: pull-request-available, starer
>
> {{StateHandleStore#releaseAndTryRemoveAll}} isn't used in production code. 
> There is also no real reason to do a final cleanup. We should clean up each 
> component at the right location rather than doing a wipe-out at the end.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] WencongLiu opened a new pull request, #22317: [FLINK-29334] Remove releaseAndTryRemoveAll in StateHandleStore

2023-03-31 Thread via GitHub


WencongLiu opened a new pull request, #22317:
URL: https://github.com/apache/flink/pull/22317

   ## What is the purpose of the change
   
   *Remove releaseAndTryRemoveAll in StateHandleStore.*
   
   
   ## Brief change log
   
 - *Remove releaseAndTryRemoveAll in StateHandleStore.*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[GitHub] [flink] zzzzzzzs commented on pull request #20330: [FLINK-26940][Table SQL/API] Add SUBSTRING_INDEX supported in SQL & Table API

2023-03-31 Thread via GitHub


zzzzzzzs commented on PR #20330:
URL: https://github.com/apache/flink/pull/20330#issuecomment-1492818311

   @huwh Could you please help to review this PR in your free time? Thanks.





[jira] [Resolved] (FLINK-31029) KBinsDiscretizer gives wrong bin edges in 'quantile' strategy when input data contains only 2 distinct values

2023-03-31 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved FLINK-31029.
--
Resolution: Fixed

> KBinsDiscretizer gives wrong bin edges in 'quantile' strategy when input data 
> contains only 2 distinct values
> -
>
> Key: FLINK-31029
> URL: https://issues.apache.org/jira/browse/FLINK-31029
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Machine Learning
>Reporter: Fan Hong
>Assignee: Zhipeng Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When one input column contains only 2 distinct values with equal counts, 
> KBinsDiscretizer transforms this column to all 0s under the `quantile` 
> strategy. An example of such a column is `[0, 0, 0, 1, 1, 1]`.
> When the 2 distinct values have different counts, the transformed values are 
> also all 0s, so the two values cannot be distinguished.
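For illustration, a hedged sketch (plain Python stdlib, not the Flink ML implementation) of why a quantile strategy degenerates on such a column: evenly spaced quantiles collapse into duplicated bin edges.

```python
from statistics import quantiles

# Column with only two distinct values and equal counts, as in the report.
data = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
n_bins = 3

# Quantile strategy: bin edges are the column minimum, the interior
# quantile cut points, and the column maximum.
edges = [min(data)] + quantiles(data, n=n_bins, method="inclusive") + [max(data)]

# The edges collapse to [0.0, 0.0, 1.0, 1.0]: duplicated edges leave
# degenerate (empty) bins, which an edge-based lookup can turn into
# "every sample maps to bin 0".
print(edges)
```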





[jira] [Commented] (FLINK-31029) KBinsDiscretizer gives wrong bin edges in 'quantile' strategy when input data contains only 2 distinct values

2023-03-31 Thread Dong Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707487#comment-17707487
 ] 

Dong Lin commented on FLINK-31029:
--

Merged to apache/flink-ml master branch 5dacbd97429a525b0f7e81931f55f3d87f79de57

> KBinsDiscretizer gives wrong bin edges in 'quantile' strategy when input data 
> contains only 2 distinct values
> -
>
> Key: FLINK-31029
> URL: https://issues.apache.org/jira/browse/FLINK-31029
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Machine Learning
>Reporter: Fan Hong
>Assignee: Zhipeng Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When one input column contains only 2 distinct values with equal counts, 
> KBinsDiscretizer transforms this column to all 0s under the `quantile` 
> strategy. An example of such a column is `[0, 0, 0, 1, 1, 1]`.
> When the 2 distinct values have different counts, the transformed values are 
> also all 0s, so the two values cannot be distinguished.





[GitHub] [flink-ml] lindong28 merged pull request #222: [FLINK-31029] Fix bug when using quantile in KbinsDiscretizer

2023-03-31 Thread via GitHub


lindong28 merged PR #222:
URL: https://github.com/apache/flink-ml/pull/222





[GitHub] [flink-ml] lindong28 commented on pull request #222: [FLINK-31029] Fix bug when using quantile in KbinsDiscretizer

2023-03-31 Thread via GitHub


lindong28 commented on PR #222:
URL: https://github.com/apache/flink-ml/pull/222#issuecomment-1492804461

   Thanks for the PR. LGTM.





[GitHub] [flink-ml] lindong28 commented on pull request #221: [FLINK-31189] Add HasMaxIndexNum param to StringIndexer

2023-03-31 Thread via GitHub


lindong28 commented on PR #221:
URL: https://github.com/apache/flink-ml/pull/221#issuecomment-1492803514

   Thanks for the PR. LGTM. Please feel free to merge this PR after resolving 
the above comment.





[jira] [Updated] (FLINK-31685) Checkpoint job folder not deleted after job is cancelled

2023-03-31 Thread Sergio Sainz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Sainz updated FLINK-31685:
-
Description: 
When a Flink job is being checkpointed and the job is then cancelled, the 
checkpoint itself is indeed deleted (as per 
{{execution.checkpointing.externalized-checkpoint-retention: 
DELETE_ON_CANCELLATION}}), but the job-id folder still remains:
 
[sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
dbc957868c08ebeb100d708bbd057593
04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
dc8e04b02c9d8a1bc04b21d2c8f21f74
05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
dfb2df1c25056e920d41c94b659dcdab
09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287

All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are 
empty.

*Expected behaviour:*

The job-id folder should also be deleted.

  was:
When a Flink job is being checkpointed and the job is then cancelled, the 
checkpoint itself is indeed deleted (as per 
{{execution.checkpointing.externalized-checkpoint-retention: 
DELETE_ON_CANCELLATION}}), but the job-id folder still remains:

[sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
dbc957868c08ebeb100d708bbd057593
04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
dc8e04b02c9d8a1bc04b21d2c8f21f74
05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
dfb2df1c25056e920d41c94b659dcdab
09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287

All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are 
empty.

*Expected behaviour:*

The job-id folder should also be deleted.


> Checkpoint job folder not deleted after job is cancelled
> 
>
> Key: FLINK-31685
> URL: https://issues.apache.org/jira/browse/FLINK-31685
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.1
>Reporter: Sergio Sainz
>Priority: Major
>
> When a Flink job is being checkpointed and the job is then cancelled, the 
> checkpoint itself is indeed deleted (as per 
> {{execution.checkpointing.externalized-checkpoint-retention: 
> DELETE_ON_CANCELLATION}}), but the job-id folder still remains:
>  
> [sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
> 01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
> 78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
> dbc957868c08ebeb100d708bbd057593
> 04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
> 79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
> dc8e04b02c9d8a1bc04b21d2c8f21f74
> 05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
> 7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
> dfb2df1c25056e920d41c94b659dcdab
> 09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
> 7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287
> All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, 
> are empty.
>  
> *Expected behaviour:*
> The job-id folder should also be deleted.





[jira] [Updated] (FLINK-31685) Checkpoint job folder not deleted after job is cancelled

2023-03-31 Thread Sergio Sainz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Sainz updated FLINK-31685:
-
Description: 
When a Flink job is being checkpointed and the job is then cancelled, the 
checkpoint itself is indeed deleted (as per 
{{execution.checkpointing.externalized-checkpoint-retention: 
DELETE_ON_CANCELLATION}}), but the job-id folder still remains:

[sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
dbc957868c08ebeb100d708bbd057593
04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
dc8e04b02c9d8a1bc04b21d2c8f21f74
05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
dfb2df1c25056e920d41c94b659dcdab
09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287

All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are 
empty.

*Expected behaviour:*

The job-id folder should also be deleted.

  was:
When a Flink job is being checkpointed and the job is then cancelled, the 
checkpoint itself is indeed deleted (as per 
{{execution.checkpointing.externalized-checkpoint-retention: 
DELETE_ON_CANCELLATION}}), but the job-id folder still remains:
 
[sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  dbc957868c08ebeb100d708bbd057593
04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  dc8e04b02c9d8a1bc04b21d2c8f21f74
05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  dfb2df1c25056e920d41c94b659dcdab
09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287
All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are 
empty.

*Expected behaviour:*

The job-id folder should also be deleted.


> Checkpoint job folder not deleted after job is cancelled
> 
>
> Key: FLINK-31685
> URL: https://issues.apache.org/jira/browse/FLINK-31685
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.1
>Reporter: Sergio Sainz
>Priority: Major
>
> When a Flink job is being checkpointed and the job is then cancelled, the 
> checkpoint itself is indeed deleted (as per 
> {{execution.checkpointing.externalized-checkpoint-retention: 
> DELETE_ON_CANCELLATION}}), but the job-id folder still remains:
>  
> [sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
> 01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
> 78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
> dbc957868c08ebeb100d708bbd057593
> 04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
> 79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
> dc8e04b02c9d8a1bc04b21d2c8f21f74
> 05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
> 7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
> dfb2df1c25056e920d41c94b659dcdab
> 09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
> 7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287
> All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, 
> are empty.
>  
> *Expected behaviour:*
> The job-id folder should also be deleted.





[jira] [Created] (FLINK-31685) Checkpoint job folder not deleted after job is cancelled

2023-03-31 Thread Sergio Sainz (Jira)
Sergio Sainz created FLINK-31685:


 Summary: Checkpoint job folder not deleted after job is cancelled
 Key: FLINK-31685
 URL: https://issues.apache.org/jira/browse/FLINK-31685
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.16.1
Reporter: Sergio Sainz


When a Flink job is being checkpointed and the job is then cancelled, the 
checkpoint itself is indeed deleted (as per 
{{execution.checkpointing.externalized-checkpoint-retention: 
DELETE_ON_CANCELLATION}}), but the job-id folder still remains:
 
[sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  dbc957868c08ebeb100d708bbd057593
04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  dc8e04b02c9d8a1bc04b21d2c8f21f74
05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  dfb2df1c25056e920d41c94b659dcdab
09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287
All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are 
empty.

*Expected behaviour:*

The job-id folder should also be deleted.





[GitHub] [flink] cnauroth commented on pull request #22281: [FLINK-31631][FileSystems] Upgrade GCS connector to 2.2.11.

2023-03-31 Thread via GitHub


cnauroth commented on PR #22281:
URL: https://github.com/apache/flink/pull/22281#issuecomment-1492710592

   @MartijnVisser , thanks for the review and commit!





[jira] [Commented] (FLINK-31661) Add parity between `ROW` value function and its type declaration

2023-03-31 Thread Mohsen Rezaei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707431#comment-17707431
 ] 

Mohsen Rezaei commented on FLINK-31661:
---

Thanks for bringing that up [~Sergey Nuyanzin]. I should've mentioned in the 
description that I'm looking to contribute native support for the field names, 
ideally without a secondary function call. The {{CAST}} solution works, but it 
is suboptimal.

[~Sergey Nuyanzin], would you consider this a small change or a bigger feature 
for the `ROW()` value function?

> Add parity between `ROW` value function and its type declaration
> -
>
> Key: FLINK-31661
> URL: https://issues.apache.org/jira/browse/FLINK-31661
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Mohsen Rezaei
>Priority: Critical
>
> Currently the [{{ROW}} table 
> type|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  allows for a name and type, and optionally a description, but [its value 
> constructing 
> function|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  only supports an arbitrary list of expressions.
> This prevents users from providing human-readable names for the fields 
> provided to a {{ROW()}} or {{()}} value function call, resulting in 
> system-defined {{EXPR$n}} names that lose their meaning as they are mixed in 
> with other queries.
> For example, the following SQL query:
> {code}
> SELECT (id, name) as struct FROM t1;
> {code}
> results in the following consumable data type for the `ROW` column:
> {code}
> ROW<`EXPR$0` DECIMAL(10, 2), `EXPR$1` STRING> NOT NULL
> {code}
> I'd be happy to contribute to this change, but I need some guidance and 
> pointers on where to start making changes for this.





[GitHub] [flink] nateab commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


nateab commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1492517797

   @zentol thanks for the info, I spoke with @pnowojski and we are happy to try 
to get this merged in 1.17 and leave master for now. 
   
   I looked at your alternative solution from the Jira:
   
   > Bundling json-path in the table-planner also does the trick btw; at least 
for the kafka tests.
   and tried adding 
   
   ```
   <dependency>
       <groupId>com.jayway.jsonpath</groupId>
       <artifactId>json-path</artifactId>
       <version>${jsonpath.version}</version>
   </dependency>
   ```
   in various poms, such as `flink-table-planner-loader-bundle`, 
`flink-table-planner-loader`, and `flink-table-planner_${scala.binary.version}`, 
but wasn't able to get the Kafka table ITCases to pass. Am I missing something?
   
   For development it's great though that we have a few workarounds available. 





[jira] [Created] (FLINK-31684) Autoscaler metrics are only visible after metric window is full

2023-03-31 Thread Maximilian Michels (Jira)
Maximilian Michels created FLINK-31684:
--

 Summary: Autoscaler metrics are only visible after metric window 
is full
 Key: FLINK-31684
 URL: https://issues.apache.org/jira/browse/FLINK-31684
 Project: Flink
  Issue Type: Improvement
  Components: Autoscaler, Kubernetes Operator
Reporter: Maximilian Michels
Assignee: Maximilian Michels


The metrics get reported only after the metric window is full. This is not 
helpful for observability after rescaling. We need to make sure that metrics 
are reported even when the metric window is not yet full.





[GitHub] [flink] tzulitai closed pull request #22303: [FLINK-31305] fix error propagation bug in WriterCallback and use Tes…

2023-03-31 Thread via GitHub


tzulitai closed pull request #22303: [FLINK-31305] fix error propagation bug in 
WriterCallback and use Tes…
URL: https://github.com/apache/flink/pull/22303





[GitHub] [flink] tzulitai commented on pull request #22303: [FLINK-31305] fix error propagation bug in WriterCallback and use Tes…

2023-03-31 Thread via GitHub


tzulitai commented on PR #22303:
URL: https://github.com/apache/flink/pull/22303#issuecomment-1492468269

   +1, LGTM. Thanks again for fixing this @mas-chen!





[jira] [Commented] (FLINK-31676) Pulsar connector should not rely on Flink Shaded

2023-03-31 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707372#comment-17707372
 ] 

Mason Chen commented on FLINK-31676:


Got it. Thanks!

> Pulsar connector should not rely on Flink Shaded
> 
>
> Key: FLINK-31676
> URL: https://issues.apache.org/jira/browse/FLINK-31676
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.0.0
>
>
> The Pulsar connector currently depends on Flink Shaded for Guava. However, 
> externalized connectors must not rely on flink-shaded. This will just not be 
> possible if we want them to work against different Flink versions.





[jira] [Commented] (FLINK-31661) Add parity between `ROW` value function and its type declaration

2023-03-31 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707367#comment-17707367
 ] 

Sergey Nuyanzin commented on FLINK-31661:
-

This should help to get the field names, I guess:
{code:sql}
select cast((1, 'name') as row(id int, name string));
{code}

> Add parity between `ROW` value function and its type declaration
> -
>
> Key: FLINK-31661
> URL: https://issues.apache.org/jira/browse/FLINK-31661
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Mohsen Rezaei
>Priority: Critical
>
> Currently the [{{ROW}} table 
> type|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  allows for a name and type, and optionally a description, but [its value 
> constructing 
> function|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  only supports an arbitrary list of expressions.
> This prevents users from providing human-readable names for the fields 
> provided to a {{ROW()}} or {{()}} value function call, resulting in 
> system-defined {{EXPR$n}} names that lose their meaning as they are mixed in 
> with other queries.
> For example, the following SQL query:
> {code}
> SELECT (id, name) as struct FROM t1;
> {code}
> results in the following consumable data type for the `ROW` column:
> {code}
> ROW<`EXPR$0` DECIMAL(10, 2), `EXPR$1` STRING> NOT NULL
> {code}
> I'd be happy to contribute to this change, but I need some guidance and 
> pointers on where to start making changes for this.





[jira] [Assigned] (FLINK-31680) Add priorityClassName to flink-operator's pods

2023-03-31 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-31680:
--

Assignee: Rafał Boniecki

> Add priorityClassName to flink-operator's pods
> --
>
> Key: FLINK-31680
> URL: https://issues.apache.org/jira/browse/FLINK-31680
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Rafał Boniecki
>Assignee: Rafał Boniecki
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.5.0
>
>
> I can't set pod priorityClassName for flink operator's pods using helm chart.





[jira] [Closed] (FLINK-31680) Add priorityClassName to flink-operator's pods

2023-03-31 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-31680.
--
Fix Version/s: kubernetes-operator-1.5.0
   Resolution: Fixed

merged to main 72926b8222e8b0b61c72f93afb869a8639a224e7

> Add priorityClassName to flink-operator's pods
> --
>
> Key: FLINK-31680
> URL: https://issues.apache.org/jira/browse/FLINK-31680
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Rafał Boniecki
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.5.0
>
>
> I can't set pod priorityClassName for flink operator's pods using helm chart.





[jira] [Updated] (FLINK-31680) Add priorityClassName to flink-operator's pods

2023-03-31 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-31680:
---
Issue Type: Improvement  (was: New Feature)

> Add priorityClassName to flink-operator's pods
> --
>
> Key: FLINK-31680
> URL: https://issues.apache.org/jira/browse/FLINK-31680
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Rafał Boniecki
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.5.0
>
>
> I can't set pod priorityClassName for flink operator's pods using helm chart.





[GitHub] [flink-kubernetes-operator] gyfora merged pull request #559: [FLINK-31680] Add priorityClassName to flink-operator's pods

2023-03-31 Thread via GitHub


gyfora merged PR #559:
URL: https://github.com/apache/flink-kubernetes-operator/pull/559





[GitHub] [flink-kubernetes-operator] mateczagany commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-03-31 Thread via GitHub


mateczagany commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1154613982


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -627,14 +637,42 @@ public Map getClusterInfo(Configuration 
conf) throws Exception {
 .toSeconds(),
 TimeUnit.SECONDS);
 
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_VERSION,
 dashboardConfiguration.getFlinkVersion());
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_REVISION,
 dashboardConfiguration.getFlinkRevision());
 }
-return runtimeVersion;
+
+// JobManager resource usage can be deduced from the CR
+var jmParameters =
+new KubernetesJobManagerParameters(
+conf, new 
KubernetesClusterClientFactory().getClusterSpecification(conf));
+var jmTotalCpu =
+jmParameters.getJobManagerCPU()
+* jmParameters.getJobManagerCPULimitFactor()
+* jmParameters.getReplicas();
+var jmTotalMemory =
+Math.round(
+jmParameters.getJobManagerMemoryMB()
+* Math.pow(1024, 2)
+* jmParameters.getJobManagerMemoryLimitFactor()
+* jmParameters.getReplicas());
+
+// TaskManager resource usage is best gathered from the REST API to 
get current replicas

Review Comment:
   I tried to implement the same logic for `tmTotalCpu` as what you did the 
with `jmTotalCpu`, and I think it should be valid: `tmCpuRequest * 
tmCpuLimitFactor * numberOfTaskManagers`
   
   `tmCpuRequest` and `tmCpuLimitFactor` are accessible the same way as for the 
JM. Just retrieve `kubernetes.taskmanager.cpu` and 
`kubernetes.taskmanager.cpu.limit-factor` from the Flink config.
   
   I'm not sure about `numberOfTaskManagers`; in my test I just fetched the 
number of TMs from the Flink REST API, but maybe we could just use 
`FlinkUtils#getNumTaskManagers` instead.
   
   Code:
   ```
   var tmTotalCpu =
           tmHardwareDesc.get().count()
                   * conf.getDouble(KubernetesConfigOptions.TASK_MANAGER_CPU)
                   * conf.getDouble(KubernetesConfigOptions.TASK_MANAGER_CPU_LIMIT_FACTOR);
   ```
   
   Limit factors:
   ```
   kubernetes.taskmanager.cpu.limit-factor = 1.3
   kubernetes.jobmanager.cpu.limit-factor = 1.3
   ```
   
   Result:
   ```
   Job Manager:
 Replicas:2
 Resource:
   Cpu:  0.5
   Memory:   1g
   Task Manager:
 Replicas:2
 Resource:
   Cpu: 0.5
   Memory:  1g
   Status:
 Cluster Info:
   Total - Cpu:  2.6
   Total - Memory:   4294967296
   ```
   
   Do you think this could work?
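
   For a quick sanity check of the arithmetic above (a sketch only — the
function and variable names here are illustrative, not the operator's API),
the reported totals follow from `replicas * request * limitFactor` for CPU
and `replicas * memory` for memory:

   ```python
GIB = 1024 ** 3  # bytes in 1Gi

def component_totals(replicas, cpu_request, cpu_limit_factor, memory_bytes):
    """Total CPU and memory for one component (JM or TM)."""
    return replicas * cpu_request * cpu_limit_factor, replicas * memory_bytes

# Values from the spec above: 2 replicas each, 0.5 CPU, 1g memory,
# cpu limit-factor 1.3 for both components.
jm_cpu, jm_mem = component_totals(2, 0.5, 1.3, 1 * GIB)
tm_cpu, tm_mem = component_totals(2, 0.5, 1.3, 1 * GIB)

print(jm_cpu + tm_cpu)  # → 2.6, matching "Total - Cpu" in the status
print(jm_mem + tm_mem)  # → 4294967296, matching "Total - Memory"
   ```

   This matches the `Total - Cpu: 2.6` and `Total - Memory: 4294967296`
shown in the status block.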






[GitHub] [flink-connector-opensearch] boring-cyborg[bot] commented on pull request #14: [hotfix] Update flink, flink-shaded, archunit, assertj, junit5, mockito, testcontainers

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #14:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/14#issuecomment-1492116987

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink-connector-opensearch] snuyanzin opened a new pull request, #14: [hotfix] Update flink, flink-shaded, archunit, assertj, junit5, mockito, testcontainers

2023-03-31 Thread via GitHub


snuyanzin opened a new pull request, #14:
URL: https://github.com/apache/flink-connector-opensearch/pull/14

   The PR bumps dependencies
   
   flink from 1.16.1 to 1.17.0
   flink-shaded from 15.0 to 16.1 (same as in flink master)
   assertj from 3.21.0 to 3.24.2 (3.23.1 in flink master)
   archunit from 0.22.0 to 1.0.1 (1.0.0 in flink master)
   junit5 from 5.8.1 to 5.9.2 (5.9.1 in flink master)
   testcontainers from 1.17.2 to 1.17.6 (same as in flink master)
   mockito from 2.21.0 to 3.4.6 (same as in flink master)
   
   it also adds 
   `archRule.failOnEmptyShould = false` since there are breaking changes in 
archunit 0.23.0 https://github.com/TNG/ArchUnit/releases/tag/v0.23.0
   ```
   As mentioned in Enhancements/Core ArchRules will now by default reject 
evaluating if the set passed to the should-clause is empty. This will break 
existing rules that don't check any elements in their should-clause. You can 
restore the old behavior by setting the ArchUnit property 
archRule.failOnEmptyShould=false
   ```
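
   For reference, a minimal sketch of how that property is typically set — in an
`archunit.properties` file on the test classpath (assuming ArchUnit's default
configuration loading; adjust to wherever this repo keeps its test resources):

   ```properties
# archunit.properties — restore pre-0.23.0 behavior for rules whose
# should-clause may evaluate an empty set of classes
archRule.failOnEmptyShould=false
   ```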





[jira] [Commented] (FLINK-30609) Add ephemeral storage to CRD

2023-03-31 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707297#comment-17707297
 ] 

Gyula Fora commented on FLINK-30609:


[~pbharaj] I see you have not made much progress on this; could 
[~ZhenqiuHuang] take it over?

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Prabcs
>Priority: Major
>  Labels: starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification 
> |https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]in
>  CRD, next to {{cpu}} and {{memory}}
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-statefun] dependabot[bot] commented on pull request #320: Bump jackson-databind from 2.13.2.2 to 2.13.4.1

2023-03-31 Thread via GitHub


dependabot[bot] commented on PR #320:
URL: https://github.com/apache/flink-statefun/pull/320#issuecomment-1492080758

   Superseded by #327.





[GitHub] [flink-statefun] dependabot[bot] closed pull request #320: Bump jackson-databind from 2.13.2.2 to 2.13.4.1

2023-03-31 Thread via GitHub


dependabot[bot] closed pull request #320: Bump jackson-databind from 2.13.2.2 
to 2.13.4.1
URL: https://github.com/apache/flink-statefun/pull/320





[GitHub] [flink-statefun] dependabot[bot] opened a new pull request, #327: Bump jackson-databind from 2.13.2.2 to 2.13.4.2

2023-03-31 Thread via GitHub


dependabot[bot] opened a new pull request, #327:
URL: https://github.com/apache/flink-statefun/pull/327

   Bumps [jackson-databind](https://github.com/FasterXML/jackson) from 2.13.2.2 
to 2.13.4.2.
   
   Commits
   
   See full diff in the compare view: https://github.com/FasterXML/jackson/commits
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.fasterxml.jackson.core:jackson-databind&package-manager=maven&previous-version=2.13.2.2&new-version=2.13.4.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-statefun/network/alerts).
   
   





[jira] [Commented] (FLINK-31675) Deadlock in AWS Connectors following content-length AWS SDK exception

2023-03-31 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707284#comment-17707284
 ] 

Danny Cranmer commented on FLINK-31675:
---

Thanks for raising [~antoniovespoli] , I have assigned the issue to you

> Deadlock in AWS Connectors following content-length AWS SDK exception
> -
>
> Key: FLINK-31675
> URL: https://issues.apache.org/jira/browse/FLINK-31675
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Antonio Vespoli
>Assignee: Antonio Vespoli
>Priority: Major
> Fix For: aws-connector-3.1.0, 1.15.5, aws-connector-4.2.0
>
>
> Connector calls to AWS services can hang on a canceled future following a 
> content-length mismatch that isn't handled gracefully by the SDK:
>  
> {code:java}
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.FutureCancelledException:
>  java.io.IOException: Response had content-length of 31 bytes, but only 
> received 0 bytes before the connection was closed.
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.lambda$null$3(NettyRequestExecutor.java:136)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.io.IOException: Response had content-length of 31 bytes, but 
> only received 0 bytes before the connection was closed.
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.validateResponseContentLength(ResponseHandler.java:163)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$700(ResponseHandler.java:75)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:369)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.complete(HandlerPublisher.java:447)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.channelInactive(HandlerPublisher.java:430)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81)
> at 
> org.apache.flink.kinesis.shaded.io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
> at 
> 

[jira] [Assigned] (FLINK-31675) Deadlock in AWS Connectors following content-length AWS SDK exception

2023-03-31 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-31675:
-

Assignee: Antonio Vespoli

> Deadlock in AWS Connectors following content-length AWS SDK exception
> -
>
> Key: FLINK-31675
> URL: https://issues.apache.org/jira/browse/FLINK-31675
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Antonio Vespoli
>Assignee: Antonio Vespoli
>Priority: Major
> Fix For: aws-connector-3.1.0, 1.15.5, aws-connector-4.2.0
>
>
> Connector calls to AWS services can hang on a canceled future following a 
> content-length mismatch that isn't handled gracefully by the SDK:
>  
> {code:java}
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.FutureCancelledException:
>  java.io.IOException: Response had content-length of 31 bytes, but only 
> received 0 bytes before the connection was closed.
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.lambda$null$3(NettyRequestExecutor.java:136)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.io.IOException: Response had content-length of 31 bytes, but 
> only received 0 bytes before the connection was closed.
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.validateResponseContentLength(ResponseHandler.java:163)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$700(ResponseHandler.java:75)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:369)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.complete(HandlerPublisher.java:447)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.channelInactive(HandlerPublisher.java:430)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81)
> at 
> org.apache.flink.kinesis.shaded.io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
> at 
> org.apache.flink.kinesis.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
> at 
> 

[jira] [Comment Edited] (FLINK-31681) Network connection timeout between operators should trigger either network re-connection or job failover

2023-03-31 Thread Dong Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707281#comment-17707281
 ] 

Dong Lin edited comment on FLINK-31681 at 3/31/23 2:25 PM:
---

This happens with Flink version 1.15.1 when we were testing Flink ML with 
parallelism = 200.

Upgrading the internal Flink library and related connectors needed by Flink ML 
would take some time. Thus we have not tried to reproduce this issue with Flink 
1.17.

Thus I choose to write down the phenomena and the error message in this JIRA to 
make sure this issue will be tracked. I will close this JIRA if we can not 
reproduce the issue with the latest Flink version.


was (Author: lindong):
This happens with Flink version 1.15.1 when we were testing Flink ML with 
parallelism = 200.

Upgrading the internal Flink library and related connectors needed by Flink ML 
would take some time. Thus we have not tried to reproduce this issue with Flink 
1.17.

Thus I choose to write down the phenomenal and the error message in this JIRA 
to make sure this issue will be tracked. I will close this JIRA if we can not 
reproduce the issue with the latest Flink version.

> Network connection timeout between operators should trigger either network 
> re-connection or job failover
> 
>
> Key: FLINK-31681
> URL: https://issues.apache.org/jira/browse/FLINK-31681
> Project: Flink
>  Issue Type: Bug
>Reporter: Dong Lin
>Priority: Major
>
> If a network connection error occurs between two operators, the upstream 
> operator may log the following error message in the method 
> PartitionRequestQueue#handleException and subsequently close the connection. 
> When this happens, the Flink job may become stuck without completing or 
> failing. 
> To avoid this issue, we can either allow the upstream operator to reconnect 
> with the downstream operator, or enable job failover so that users can take 
> corrective action promptly.
> org.apache.flink.runtime.io.network.netty.PartitionRequestQueue - Encountered 
> error while consuming partitions 
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors#NativeIOException:
>  writeAccess(...) failed: Connection timed out.





[jira] [Commented] (FLINK-31681) Network connection timeout between operators should trigger either network re-connection or job failover

2023-03-31 Thread Dong Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707281#comment-17707281
 ] 

Dong Lin commented on FLINK-31681:
--

This happens with Flink version 1.15.1 when we were testing Flink ML with 
parallelism = 200.

Upgrading the internal Flink library and related connectors needed by Flink ML 
would take some time. Thus we have not tried to reproduce this issue with Flink 
1.17.

Thus I choose to write down the phenomena and the error message in this JIRA 
to make sure this issue will be tracked. I will close this JIRA if we cannot 
reproduce the issue with the latest Flink version.

> Network connection timeout between operators should trigger either network 
> re-connection or job failover
> 
>
> Key: FLINK-31681
> URL: https://issues.apache.org/jira/browse/FLINK-31681
> Project: Flink
>  Issue Type: Bug
>Reporter: Dong Lin
>Priority: Major
>
> If a network connection error occurs between two operators, the upstream 
> operator may log the following error message in the method 
> PartitionRequestQueue#handleException and subsequently close the connection. 
> When this happens, the Flink job may become stuck without completing or 
> failing. 
> To avoid this issue, we can either allow the upstream operator to reconnect 
> with the downstream operator, or enable job failover so that users can take 
> corrective action promptly.
> org.apache.flink.runtime.io.network.netty.PartitionRequestQueue - Encountered 
> error while consuming partitions 
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors#NativeIOException:
>  writeAccess(...) failed: Connection timed out.





[jira] [Commented] (FLINK-30719) flink-runtime-web failed due to a corrupted

2023-03-31 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707270#comment-17707270
 ] 

Sergey Nuyanzin commented on FLINK-30719:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47775=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=12999

> flink-runtime-web failed due to a corrupted 
> 
>
> Key: FLINK-30719
> URL: https://issues.apache.org/jira/browse/FLINK-30719
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend, Test Infrastructure, Tests
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=44954=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=12550
> The build failed due to a corrupted nodejs dependency:
> {code}
> [ERROR] The archive file 
> /__w/1/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
>  is corrupted and will be deleted. Please try the build again.
> {code}





[jira] [Resolved] (FLINK-31683) Align the outdated Chinese filesystem connector docs

2023-03-31 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-31683.
--
Resolution: Fixed

merged.
master: 370929c838fbac21cc51a225f3c690c1ee8931a8
release-1.17: 49db0c628e50182f160a99ec7bbd5148ed725792
release-1.16: 701fccf4556765c5c75eb7b7c66f0a4d9edfc957
release-1.15: 27f93da8f4a6813af5030691e72d132bb35ec1c0

> Align the outdated Chinese filesystem connector docs
> 
>
> Key: FLINK-31683
> URL: https://issues.apache.org/jira/browse/FLINK-31683
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, 1.15.5
>
>
> The current Chinese doc of the file system SQL connector has been outdated 
> since Flink 1.15; we should fix it to avoid misunderstanding.





[jira] [Closed] (FLINK-31676) Pulsar connector should not rely on Flink Shaded

2023-03-31 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31676.
--
Fix Version/s: pulsar-4.0.0
   Resolution: Fixed

Fixed in main: 104bdc378b4f4d12ffeaf550cccd2d5633bef58d

> Pulsar connector should not rely on Flink Shaded
> 
>
> Key: FLINK-31676
> URL: https://issues.apache.org/jira/browse/FLINK-31676
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.0.0
>
>
> The Pulsar connector currently depends on Flink Shaded for Guava. However, 
> externalized connectors must not rely on flink-shaded. This will just not be 
> possible if we want them to work against different Flink versions.





[GitHub] [flink-connector-pulsar] MartijnVisser merged pull request #37: [FLINK-31676][Connector/Pulsar] Replace Shaded Guava from Flink with Shaded Guava from Pulsar

2023-03-31 Thread via GitHub


MartijnVisser merged PR #37:
URL: https://github.com/apache/flink-connector-pulsar/pull/37





[GitHub] [flink-connector-pulsar] MartijnVisser commented on pull request #37: [FLINK-31676][Connector/Pulsar] Replace Shaded Guava from Flink with Shaded Guava from Pulsar

2023-03-31 Thread via GitHub


MartijnVisser commented on PR #37:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/37#issuecomment-1491954797

   > how can we prevent dependencies on Flink Shade predictably?
   
   I don't think we can; it's a transitive dependency of Flink, which is needed 
in order to build/test the connector. 





[GitHub] [flink] Myasuka merged pull request #22315: [FLINK-31683][docs-zh] Update the outdated Chinese filesystem connector docs

2023-03-31 Thread via GitHub


Myasuka merged PR #22315:
URL: https://github.com/apache/flink/pull/22315





[GitHub] [flink] WencongLiu commented on pull request #22121: [FLINK-27051] fix CompletedCheckpoint.DiscardObject.discard is not idempotent

2023-03-31 Thread via GitHub


WencongLiu commented on PR #22121:
URL: https://github.com/apache/flink/pull/22121#issuecomment-1491935013

   @XComp I'm interested in the background of your ticket. Based on the 
description, I think the key point of this ticket is that "CompletedCheckpoints 
are being discarded in CheckpointsCleaner". Could you provide the specific 
codepath for this? Additionally, I would like to learn more about "the contract 
of StateObject#discardState". If these are clear, I would be happy to drive 
the entire issue. 





[GitHub] [flink] gaborgsomogyi commented on a diff in pull request #22298: [FLINK-31656][runtime][security] Obtain delegation tokens early to support external file system usage in HA services

2023-03-31 Thread via GitHub


gaborgsomogyi commented on code in PR #22298:
URL: https://github.com/apache/flink/pull/22298#discussion_r1154481057


##
flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java:
##
@@ -418,6 +418,21 @@ public void start() throws Exception {
 
ClusterEntrypointUtils.getPoolSize(configuration),
 new ExecutorThreadFactory("mini-cluster-io"));
 
+delegationTokenManager =
+DefaultDelegationTokenManagerFactory.create(
+configuration,
+miniClusterConfiguration.getPluginManager(),
+commonRpcService.getScheduledExecutor(),
+ioExecutor);
+// Obtaining delegation tokens and propagating them to the local JVM receivers in a
+// one-time fashion is required because BlobServer may connect to external file
+// systems
+delegationTokenManager.obtainDelegationTokens();

Review Comment:
   Added a check in `ClusterEntrypointTest` that we obtain tokens, but it would 
be overkill to verify that tokens are obtained before the HA services start.






[GitHub] [flink-kubernetes-operator] mxm merged pull request #556: [FLINK-30575] Set processing capacity to infinite if task is idle

2023-03-31 Thread via GitHub


mxm merged PR #556:
URL: https://github.com/apache/flink-kubernetes-operator/pull/556





[jira] [Closed] (FLINK-31468) Allow setting JobResourceRequirements through DispatcherGateway

2023-03-31 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-31468.

Fix Version/s: 1.18.0
   Resolution: Fixed

master: 3072e176ad4a894ef56c51003dd9baef976e600a

> Allow setting JobResourceRequirements through DispatcherGateway
> ---
>
> Key: FLINK-31468
> URL: https://issues.apache.org/jira/browse/FLINK-31468
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: David Morávek
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] hackergin commented on pull request #22310: [hotfix][doc] Fix the incorrect description for TO_TIMESTAMP function

2023-03-31 Thread via GitHub


hackergin commented on PR #22310:
URL: https://github.com/apache/flink/pull/22310#issuecomment-1491884369

   @MartijnVisser  hi,  do you think this modification is necessary? 





[jira] [Closed] (FLINK-31672) Requirement validation does not take user-specified or scheduler-generated maxParallelism into account

2023-03-31 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-31672.

Resolution: Fixed

master: 7e16a0fa61177a51cb50d0f198464939310508a0

> Requirement validation does not take user-specified or scheduler-generated 
> maxParallelism into account
> --
>
> Key: FLINK-31672
> URL: https://issues.apache.org/jira/browse/FLINK-31672
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.18.0
>
>






[GitHub] [flink] zentol merged pull request #22296: [FLINK-31468] Add mechanics to the Dispatcher for handling job resource requirements

2023-03-31 Thread via GitHub


zentol merged PR #22296:
URL: https://github.com/apache/flink/pull/22296





[jira] [Updated] (FLINK-31623) Change to uniform sampling in DataStreamUtils#sample method

2023-03-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31623:
---
Labels: pull-request-available  (was: )

> Change to uniform sampling in DataStreamUtils#sample method
> ---
>
> Key: FLINK-31623
> URL: https://issues.apache.org/jira/browse/FLINK-31623
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Machine Learning
>Reporter: Fan Hong
>Priority: Major
>  Labels: pull-request-available
>
> The current implementation employs a two-level sampling method.
> However, when data instances are unevenly distributed among partitions 
> (subtasks), the probability of an instance being sampled differs across 
> partitions (subtasks), i.e., the sampling is not uniform.
>  
> In addition, one side-effect of current implementation is: one subtask has a 
> memory footprint of `2 * numSamples * sizeof(element)`, which could cause 
> unexpected OOM in some situations.





[GitHub] [flink-ml] zhipeng93 commented on a diff in pull request #227: [FLINK-31623] Fix DataStreamUtils#sample to uniform sampling.

2023-03-31 Thread via GitHub


zhipeng93 commented on code in PR #227:
URL: https://github.com/apache/flink-ml/pull/227#discussion_r1154426414


##
flink-ml-core/src/main/java/org/apache/flink/ml/common/datastream/DataStreamUtils.java:
##
@@ -280,6 +280,15 @@ public static  DataStream aggregate(
  * This method takes samples without replacement. If the number of 
elements in the stream is
  * smaller than expected number of samples, all elements will be included 
in the sample.
  *
+ * Technical details about this method: Firstly, the input elements are 
rebalanced. Then, in

Review Comment:
   nit: How about we update the java doc as follows and move the technical 
details to the implementation part or remove it?
   
   `Performs an approximate uniform sampling over the elements in a bounded 
data stream. The difference in probability of two data points being sampled is 
bounded by O(numSamples * p * p / (M * M)), where p is the parallelism of the 
input stream and M is the total number of data points that the input stream 
contains.`
   
   This method takes samples without replacement. If the number of elements in 
the stream is smaller than expected number of samples, all elements will be 
included in the sample.



##
flink-ml-core/src/main/java/org/apache/flink/ml/common/datastream/DataStreamUtils.java:
##
@@ -280,6 +280,15 @@ public static  DataStream aggregate(
  * This method takes samples without replacement. If the number of 
elements in the stream is
  * smaller than expected number of samples, all elements will be included 
in the sample.
  *
+ * Technical details about this method: Firstly, the input elements are 
rebalanced. Then, in

Review Comment:
   nit: How about we update the java doc as follows and move the technical 
details to the implementation part or remove it?
   
   `Performs an approximate uniform sampling over the elements in a bounded 
data stream. The difference in probability of two data points being sampled is 
bounded by O(numSamples * p * p / (M * M)), where p is the parallelism of the 
input stream and M is the total number of data points that the input stream 
contains.`
   
   `This method takes samples without replacement. If the number of elements in 
the stream is smaller than expected number of samples, all elements will be 
included in the sample.`
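For reference, an exact single-pass uniform sample without replacement is commonly drawn with reservoir sampling (Vitter's Algorithm R). The sketch below is illustrative only — it is not the `DataStreamUtils#sample` implementation under discussion, and the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class ReservoirSample {

    // Algorithm R: after n elements have been seen, every element has
    // probability min(1, numSamples / n) of being in the reservoir.
    public static <T> List<T> sample(Iterator<T> input, int numSamples, Random rnd) {
        List<T> reservoir = new ArrayList<>(numSamples);
        long seen = 0;
        while (input.hasNext()) {
            T element = input.next();
            seen++;
            if (reservoir.size() < numSamples) {
                // Fewer elements than requested samples: keep everything.
                reservoir.add(element);
            } else {
                // Replace a random slot with probability numSamples / seen.
                long j = (long) (rnd.nextDouble() * seen);
                if (j < numSamples) {
                    reservoir.set((int) j, element);
                }
            }
        }
        return reservoir;
    }
}
```

A single-operator version keeps only `numSamples` elements in memory, which also addresses the `2 * numSamples * sizeof(element)` footprint mentioned in the ticket; the distributed two-level variant trades exactness for parallelism, as the suggested javadoc bound describes.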






[jira] [Updated] (FLINK-31680) Add priorityClassName to flink-operator's pods

2023-03-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31680:
---
Labels: pull-request-available  (was: )

> Add priorityClassName to flink-operator's pods
> --
>
> Key: FLINK-31680
> URL: https://issues.apache.org/jira/browse/FLINK-31680
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Rafał Boniecki
>Priority: Major
>  Labels: pull-request-available
>
> I can't set pod priorityClassName for flink operator's pods using helm chart.





[GitHub] [flink-kubernetes-operator] boniek83 opened a new pull request, #559: [FLINK-31680] Add priorityClassName to flink-operator's pods

2023-03-31 Thread via GitHub


boniek83 opened a new pull request, #559:
URL: https://github.com/apache/flink-kubernetes-operator/pull/559

   ## What is the purpose of the change
   
   This pr adds priorityClassName to operator pod.
   
   ## Brief change log
   
   PR is simple and self-descriptive.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   





[GitHub] [flink] libenchao commented on a diff in pull request #22289: [FLINK-31545][jdbc-driver] Create executor in flink connection

2023-03-31 Thread via GitHub


libenchao commented on code in PR #22289:
URL: https://github.com/apache/flink/pull/22289#discussion_r1154400863


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkConnection.java:
##
@@ -18,34 +18,64 @@
 
 package org.apache.flink.table.jdbc;
 
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.StatementResult;
+import org.apache.flink.table.gateway.service.context.DefaultContext;
+import org.apache.flink.table.jdbc.utils.DriverUtils;
+
 import java.sql.DatabaseMetaData;
 import java.sql.SQLClientInfoException;
 import java.sql.SQLException;
 import java.sql.SQLFeatureNotSupportedException;
 import java.sql.Statement;
+import java.util.Collections;
 import java.util.Properties;
+import java.util.UUID;
 
 /** Connection to flink sql gateway for jdbc driver. */
 public class FlinkConnection extends BaseConnection {
-private final DriverUri driverUri;
+private final Executor executor;
+private volatile boolean closed = false;
 
 public FlinkConnection(DriverUri driverUri) {
-this.driverUri = driverUri;
+this.executor =
+Executor.create(
+new DefaultContext(
+Configuration.fromMap(
+
DriverUtils.fromProperties(driverUri.getProperties())),
+Collections.emptyList()),
+driverUri.getAddress(),
+UUID.randomUUID().toString());
+driverUri.getCatalog().ifPresent(this::setSessionCatalog);
+driverUri.getDatabase().ifPresent(this::setSessionSchema);
 }
 
 @Override
 public Statement createStatement() throws SQLException {
 throw new SQLFeatureNotSupportedException();
 }
 
+Executor getExecutor() {

Review Comment:
   Add a `VisibleForTesting` annotation?



##
flink-table/flink-sql-jdbc-driver/src/test/java/org/apache/flink/table/jdbc/FlinkConnectionTest.java:
##
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.jdbc;
+
+import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.SingleSessionManager;
+import org.apache.flink.table.client.gateway.StatementResult;
+import 
org.apache.flink.table.gateway.rest.util.SqlGatewayRestEndpointExtension;
+import org.apache.flink.table.gateway.service.utils.SqlGatewayServiceExtension;
+import org.apache.flink.test.junit5.MiniClusterExtension;
+
+import org.junit.jupiter.api.Order;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.RegisterExtension;
+
+import java.sql.SQLException;
+import java.util.Properties;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertThrowsExactly;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+
+/** Tests for {@link FlinkConnection}. */
+public class FlinkConnectionTest {
+@RegisterExtension
+@Order(1)
+private static final MiniClusterExtension MINI_CLUSTER_RESOURCE =
+new MiniClusterExtension(
+new MiniClusterResourceConfiguration.Builder()
+.setNumberTaskManagers(1)
+.setNumberSlotsPerTaskManager(4)
+.build());
+
+@RegisterExtension
+@Order(2)
+public static final SqlGatewayServiceExtension 
SQL_GATEWAY_SERVICE_EXTENSION =
+new SqlGatewayServiceExtension(
+MINI_CLUSTER_RESOURCE::getClientConfiguration, 
SingleSessionManager::new);
+
+@RegisterExtension
+@Order(3)
+private static final SqlGatewayRestEndpointExtension 
SQL_GATEWAY_REST_ENDPOINT_EXTENSION =
+new 
SqlGatewayRestEndpointExtension(SQL_GATEWAY_SERVICE_EXTENSION::getService);
+
+@Test
+public void testCatalogSchema() throws Exception {
+DriverUri driverUri =
+DriverUri.create(
+

[GitHub] [flink-connector-aws] dannycranmer merged pull request #63: [hotfix][ci] Update Flink versions in CI builds

2023-03-31 Thread via GitHub


dannycranmer merged PR #63:
URL: https://github.com/apache/flink-connector-aws/pull/63





[GitHub] [flink-connector-jdbc] boring-cyborg[bot] commented on pull request #34: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #34:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/34#issuecomment-1491841620

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink-connector-cassandra] boring-cyborg[bot] commented on pull request #4: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #4:
URL: 
https://github.com/apache/flink-connector-cassandra/pull/4#issuecomment-1491841503

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink-connector-cassandra] dannycranmer merged pull request #4: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer merged PR #4:
URL: https://github.com/apache/flink-connector-cassandra/pull/4





[GitHub] [flink-connector-rabbitmq] boring-cyborg[bot] commented on pull request #6: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #6:
URL: 
https://github.com/apache/flink-connector-rabbitmq/pull/6#issuecomment-1491841279

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink-connector-jdbc] dannycranmer merged pull request #34: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer merged PR #34:
URL: https://github.com/apache/flink-connector-jdbc/pull/34





[GitHub] [flink-connector-rabbitmq] dannycranmer merged pull request #6: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer merged PR #6:
URL: https://github.com/apache/flink-connector-rabbitmq/pull/6





[GitHub] [flink-connector-opensearch] boring-cyborg[bot] commented on pull request #13: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #13:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/13#issuecomment-1491841203

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink-connector-elasticsearch] dannycranmer merged pull request #54: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer merged PR #54:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/54





[GitHub] [flink-connector-opensearch] dannycranmer merged pull request #13: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer merged PR #13:
URL: https://github.com/apache/flink-connector-opensearch/pull/13





[GitHub] [flink-connector-elasticsearch] boring-cyborg[bot] commented on pull request #54: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #54:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/54#issuecomment-1491841143

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink] zzzzzzzs commented on a diff in pull request #20342: [FLINK-26939][Table SQL/API] Add TRANSLATE supported in SQL & Table API

2023-03-31 Thread via GitHub


zzzzzzzs commented on code in PR #20342:
URL: https://github.com/apache/flink/pull/20342#discussion_r1154393863


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/SqlFunctionUtils.java:
##
@@ -371,6 +371,11 @@ public static String repeat(String str, int repeat) {
 return EncodingUtils.repeat(str, repeat);
 }
 
+/** Returns an expr where all characters in from have been replaced with 
those in to. */
+public static String translate3(String str, String search, String 
replacement) {
+return org.apache.commons.lang3.StringUtils.replaceChars(str, search, 
replacement);

Review Comment:
   StringUtils#replaceChars maps each character in one set to the corresponding 
character in another set, while SqlFunctionUtils#replace replaces a substring 
with another string.
   example:
   ``` java
   String replaced = StringUtils.replaceChars("AaBbCc", "abc", "123");
   System.out.println(replaced); // "A1B2C3"
   String replaced1 = SqlFunctionUtils.replace("AaBbCc", "abc", "123");
   System.out.println(replaced1); // "AaBbCc"
   ```
   To meet the functionality requirements of Hive or Spark, it is necessary to 
use StringUtils#replaceChars. This is also the approach taken in Calcite.
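The positional character mapping that `replaceChars` performs can be written out directly. This is a standalone sketch of those semantics for illustration — the `TranslateSketch` class below is hypothetical, not the commons-lang3 or Flink code:

```java
public class TranslateSketch {

    // Positional character mapping (the semantics of commons-lang3
    // StringUtils.replaceChars): a char of `str` found at index i in
    // `search` becomes replacement.charAt(i); if `replacement` is
    // shorter than `search`, the unmatched chars are deleted.
    public static String translate(String str, String search, String replacement) {
        StringBuilder out = new StringBuilder(str.length());
        for (int i = 0; i < str.length(); i++) {
            char c = str.charAt(i);
            int idx = search.indexOf(c);
            if (idx < 0) {
                out.append(c); // not in `search`: keep unchanged
            } else if (idx < replacement.length()) {
                out.append(replacement.charAt(idx)); // mapped character
            }
            // else: mapped to nothing, i.e. the character is dropped
        }
        return out.toString();
    }
}
```

Here `translate("AaBbCc", "abc", "123")` yields `"A1B2C3"`, while a substring-based replace leaves `"AaBbCc"` untouched because it contains no `"abc"` substring.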






[GitHub] [flink] gyfora commented on a diff in pull request #22298: [FLINK-31656][runtime][security] Obtain delegation tokens early to support external file system usage in HA services

2023-03-31 Thread via GitHub


gyfora commented on code in PR #22298:
URL: https://github.com/apache/flink/pull/22298#discussion_r1154386632


##
flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java:
##
@@ -418,6 +418,21 @@ public void start() throws Exception {
 
ClusterEntrypointUtils.getPoolSize(configuration),
 new ExecutorThreadFactory("mini-cluster-io"));
 
+delegationTokenManager =
+DefaultDelegationTokenManagerFactory.create(
+configuration,
+miniClusterConfiguration.getPluginManager(),
+commonRpcService.getScheduledExecutor(),
+ioExecutor);
+// Obtaining delegation tokens and propagating them to the 
local JVM receivers in a
+// one-time fashion is required because BlobServer may connect 
to external file
+// systems
+delegationTokenManager.obtainDelegationTokens();

Review Comment:
   I don't seem to find any test for this new behaviour, would be good to add 
something to guard against accidental regressions in the future.






[GitHub] [flink] MartijnVisser commented on a diff in pull request #21873: [FLINK-30921][ci] Adds mirrors instead of relying on a single source for Ubuntu packages

2023-03-31 Thread via GitHub


MartijnVisser commented on code in PR #21873:
URL: https://github.com/apache/flink/pull/21873#discussion_r1154369959


##
tools/azure-pipelines/e2e-template.yml:
##
@@ -98,6 +98,19 @@ jobs:
 
 echo "Free up disk space"
 ./tools/azure-pipelines/free_disk_space.sh
+
+# the APT mirrors access is based on a proposal from 
https://github.com/actions/runner-images/issues/7048#issuecomment-1419426054
+echo "Configure APT mirrors"
+mirror_file_path="/etc/apt/mirrors.txt"
+default_ubuntu_mirror_url="http://azure.archive.ubuntu.com/ubuntu/"
+
+# add Azure's Ubuntu mirror as a top-priority source
+echo -e "${default_ubuntu_mirror_url}\tpriority:1" | sudo tee 
${mirror_file_path}
+
+# use other mirrors as a fallback option
+curl http://mirrors.ubuntu.com/mirrors.txt | sudo tee --append 
${mirror_file_path}

Review Comment:
   The only thing this could introduce is an option for a supply chain attack, but 
we already have other sources where we do something similar, and I would 
classify the risk that Ubuntu gets compromised the same as we do for the other 
sources (Maven/GitHub). 






[GitHub] [flink] zentol commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


zentol commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1491780342

   You may also be able to work around this issue by working against 
https://github.com/apache/flink-connector-kafka instead.





[GitHub] [flink-connector-jdbc] boring-cyborg[bot] commented on pull request #34: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #34:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/34#issuecomment-1491778630

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink-connector-mongodb] zentol merged pull request #4: [hotfix] Fix unstable test of MongoSinkITCase.testRecovery

2023-03-31 Thread via GitHub


zentol merged PR #4:
URL: https://github.com/apache/flink-connector-mongodb/pull/4





[GitHub] [flink-connector-cassandra] boring-cyborg[bot] commented on pull request #4: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #4:
URL: 
https://github.com/apache/flink-connector-cassandra/pull/4#issuecomment-1491772224

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink-connector-cassandra] dannycranmer opened a new pull request, #4: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer opened a new pull request, #4:
URL: https://github.com/apache/flink-connector-cassandra/pull/4

   As per apache/flink-connector-mongodb@a4a3250
   
   Disable dependency convergence for nightly builds to unblock Flink 1.17 
build verification





[GitHub] [flink-connector-rabbitmq] dannycranmer opened a new pull request, #6: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer opened a new pull request, #6:
URL: https://github.com/apache/flink-connector-rabbitmq/pull/6

   As per apache/flink-connector-mongodb@a4a3250
   
   Disable dependency convergence for nightly builds to unblock Flink 1.17 
build verification





[GitHub] [flink-connector-rabbitmq] boring-cyborg[bot] commented on pull request #6: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #6:
URL: 
https://github.com/apache/flink-connector-rabbitmq/pull/6#issuecomment-1491769209

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[jira] [Assigned] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-31 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-31621:
-

Assignee: jackylau

> Add ARRAY_REVERSE supported in SQL & Table API
> --
>
> Key: FLINK-31621
> URL: https://issues.apache.org/jira/browse/FLINK-31621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_reverse(array) - Returns an array in reverse order.
> Syntax:
> array_reverse(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns an array in reverse order.
> Returns null if the argument is null
> {code:sql}
> > SELECT array_reverse(array(1, 2, 2, NULL));
>  NULL, 2, 2, 1{code}
> See also
> bigquery 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse]





[jira] [Closed] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-31 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang closed FLINK-31621.
-
Resolution: Implemented

Merged into master: 122429f99b9b75737cd218a56a37a7ced750582f

> Add ARRAY_REVERSE supported in SQL & Table API
> --
>
> Key: FLINK-31621
> URL: https://issues.apache.org/jira/browse/FLINK-31621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_reverse(array) - Returns an array in reverse order.
> Syntax:
> array_reverse(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns an array in reverse order.
> Returns null if the argument is null
> {code:sql}
> > SELECT array_reverse(array(1, 2, 2, NULL));
>  NULL, 2, 2, 1{code}
> See also
> bigquery 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse]
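The documented semantics (reverse element order, preserve nulls, return null for a null argument) are straightforward to mirror in Java. The sketch below is illustrative only — the class name is made up and this is not the planner's built-in function code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ArrayReverseSketch {

    // Null-safe reversal matching the documented semantics:
    // a null input yields null; element order is reversed and
    // null elements are preserved.
    public static <T> List<T> arrayReverse(List<T> array) {
        if (array == null) {
            return null;
        }
        List<T> reversed = new ArrayList<>(array);
        Collections.reverse(reversed);
        return reversed;
    }
}
```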





[GitHub] [flink-connector-elasticsearch] boring-cyborg[bot] commented on pull request #54: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #54:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/54#issuecomment-1491760352

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] dannycranmer opened a new pull request, #54: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer opened a new pull request, #54:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/54

   As per 
https://github.com/apache/flink-connector-mongodb/commit/a4a3250423592a9bff8859f3132fb72840d0f2b5
   
   Disable dependency convergence for nightly builds to unblock Flink 1.17 
build verification





[GitHub] [flink] fsk119 merged pull request #22277: [FLINK-31621][table] Add built-in ARRAY_REVERSE function.

2023-03-31 Thread via GitHub


fsk119 merged PR #22277:
URL: https://github.com/apache/flink/pull/22277





[GitHub] [flink-connector-aws] z3d1k opened a new pull request, #63: [hotfix][ci] Update Flink versions in CI builds

2023-03-31 Thread via GitHub


z3d1k opened a new pull request, #63:
URL: https://github.com/apache/flink-connector-aws/pull/63

   
   
   ## Purpose of the change
   
   - Update CI workflows to include Flink 1.17.0 in PR build and 1.18-SNAPSHOT 
in nightly builds
   - Use archive.apache.org for downloading binaries in line with other 
connectors
   
   ## Verifying this change
   
   This change was verified by build on commit push.
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? not applicable
   





[GitHub] [flink-connector-opensearch] boring-cyborg[bot] commented on pull request #13: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


boring-cyborg[bot] commented on PR #13:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/13#issuecomment-1491751467

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink-connector-opensearch] dannycranmer opened a new pull request, #13: [hotfix] Disable nighty dependency convergence

2023-03-31 Thread via GitHub


dannycranmer opened a new pull request, #13:
URL: https://github.com/apache/flink-connector-opensearch/pull/13

   As per 
https://github.com/apache/flink-connector-mongodb/commit/a4a3250423592a9bff8859f3132fb72840d0f2b5
   
   Disable dependency convergence for nightly builds to unblock Flink 1.17 
build verification 





[GitHub] [flink] liuyongvs commented on pull request #22277: [FLINK-31621][table] Add built-in ARRAY_REVERSE function.

2023-03-31 Thread via GitHub


liuyongvs commented on PR #22277:
URL: https://github.com/apache/flink/pull/22277#issuecomment-1491725693

   @snuyanzin @fsk119 will it be merged?





[GitHub] [flink] liuyongvs commented on pull request #22312: [FLINK-31677][table] Add built-in MAP_ENTRIES function.

2023-03-31 Thread via GitHub


liuyongvs commented on PR #22312:
URL: https://github.com/apache/flink/pull/22312#issuecomment-1491722243

   hi @snuyanzin do you have time to review it?





[GitHub] [flink] snuyanzin commented on a diff in pull request #20342: [FLINK-26939][Table SQL/API] Add TRANSLATE supported in SQL & Table API

2023-03-31 Thread via GitHub


snuyanzin commented on code in PR #20342:
URL: https://github.com/apache/flink/pull/20342#discussion_r1154293981


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/SqlFunctionUtils.java:
##
@@ -371,6 +371,11 @@ public static String repeat(String str, int repeat) {
 return EncodingUtils.repeat(str, repeat);
 }
 
+/** Returns an expr where all characters in from have been replaced with 
those in to. */
+public static String translate3(String str, String search, String 
replacement) {
+return org.apache.commons.lang3.StringUtils.replaceChars(str, search, 
replacement);

Review Comment:
   I didn't say to use `StringUtils#replace`; I asked about using `replace` 
from this class, i.e. `SqlFunctionUtils#replace`.
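For readers following along: commons-lang3 `StringUtils.replaceChars(str, searchChars, replaceChars)` replaces each character of `str` found in `searchChars` with the character at the same position in `replaceChars`, and deletes it when `replaceChars` is shorter. A rough self-contained model of that behavior (illustrative only, not the commons-lang3 or Flink implementation):

```java
public class TranslateSketch {
    // Models StringUtils.replaceChars: each char of str that occurs in
    // search is replaced by the char at the same index in replacement;
    // chars whose index falls past the end of replacement are dropped.
    public static String translate(String str, String search, String replacement) {
        if (str == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder(str.length());
        for (char c : str.toCharArray()) {
            int idx = search.indexOf(c);
            if (idx < 0) {
                sb.append(c); // not in search: keep as-is
            } else if (idx < replacement.length()) {
                sb.append(replacement.charAt(idx)); // positional replacement
            } // else: replacement exhausted, character is deleted
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(translate("hello", "ho", "HO")); // HellO
        System.out.println(translate("hello", "ho", "H"));  // Hell
    }
}
```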






[jira] (FLINK-26692) migrate TpcdsTestProgram.java to new source

2023-03-31 Thread Zhu Zhu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26692 ]


Zhu Zhu deleted comment on FLINK-26692:
-

was (Author: flink-jira-bot):
I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> migrate TpcdsTestProgram.java to new source
> ---
>
> Key: FLINK-26692
> URL: https://issues.apache.org/jira/browse/FLINK-26692
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: zhouli
>Assignee: zhouli
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0
>
>
> [run-nightly-tests.sh#L220|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/run-nightly-tests.sh#L220]
>  runs TpcdsTestProgram, which uses the legacy source with 
> AdaptiveBatchScheduler. Since there are some known issues (FLINK-26576, 
> FLINK-26548) with the legacy source, I think we should migrate TpcdsTestProgram 
> to the new source asap.







[GitHub] [flink] zentol commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


zentol commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1491680270

   I'd also suggest looking into 
https://issues.apache.org/jira/browse/FLINK-31660?focusedCommentId=17707214&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17707214
 because that _may_ be a cleaner solution.





[GitHub] [flink] zentol commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


zentol commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1491678202

   Mind you, that merging this to release-1.17 is a completely different story.





[GitHub] [flink-kubernetes-operator] mbalassi commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-03-31 Thread via GitHub


mbalassi commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1154290969


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -627,14 +637,42 @@ public Map getClusterInfo(Configuration 
conf) throws Exception {
 .toSeconds(),
 TimeUnit.SECONDS);
 
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_VERSION,
 dashboardConfiguration.getFlinkVersion());
-runtimeVersion.put(
+clusterInfo.put(
 DashboardConfiguration.FIELD_NAME_FLINK_REVISION,
 dashboardConfiguration.getFlinkRevision());
 }
-return runtimeVersion;
+
+// JobManager resource usage can be deduced from the CR
+var jmParameters =
+new KubernetesJobManagerParameters(
+conf, new 
KubernetesClusterClientFactory().getClusterSpecification(conf));
+var jmTotalCpu =
+jmParameters.getJobManagerCPU()
+* jmParameters.getJobManagerCPULimitFactor()
+* jmParameters.getReplicas();
+var jmTotalMemory =
+Math.round(
+jmParameters.getJobManagerMemoryMB()
+* Math.pow(1024, 2)
+* jmParameters.getJobManagerMemoryLimitFactor()
+* jmParameters.getReplicas());
+
+// TaskManager resource usage is best gathered from the REST API to 
get current replicas

Review Comment:
   There is a limit factor for TaskManager cores that Flink allows to be 
configured on top of the resources defined at the Kubernetes level, similarly 
to how I calculated the JobManager resources. I set up an example to validate 
your suggestion with one JM and one TM, each with 0.5 cpus configured in the 
resources field. The cpu limit factors are 1.0. We end up with 1.5 cpus (0.5 
for the JM, accurately reported, and 1.0 for the TM).
   
   ```
 jobManager:
   replicas: 1
   resource:
 cpu: 0.5
 memory: 2048m
 serviceAccount: flink
 taskManager:
   resource:
 cpu: 0.5
 memory: 2048m
   status:
 clusterInfo:
   flink-revision: DeadD0d0 @ 1970-01-01T01:00:00+01:00
   flink-version: 1.16.1
   tm-cpu-limit-factor: "1.0"
   jm-cpu-limit-factor: "1.0"
   total-cpu: "1.5"
   total-memory: "4294967296"
 jobManagerDeploymentStatus: READY
   ```
   
   It is a bit of a tough problem, because the Flink UI also shows 1 core for 
the TM (using the same value that we get from the REST API).
   
   (screenshot: https://user-images.githubusercontent.com/5990983/229091963-f5e9a985-2ebe-4518-9623-6a4d4da9ad3c.png)
   
   So ultimately we have to decide whether to stick with Flink or with 
Kubernetes; I am leaning towards the latter (factoring in the limit 
factor, but avoiding the rounding).
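For reference, the totals in the status block above follow the diff's formula (per-pod request × limit factor × replicas). A minimal standalone sketch of that arithmetic, with hypothetical method names rather than the operator's actual classes:

```java
public class ResourceTotalsSketch {
    // Hypothetical stand-ins for the KubernetesJobManagerParameters values.
    public static double totalCpu(double cpuPerPod, double cpuLimitFactor, int replicas) {
        return cpuPerPod * cpuLimitFactor * replicas;
    }

    public static long totalMemoryBytes(int memoryMb, double memoryLimitFactor, int replicas) {
        // Same rounding as the diff: MB -> bytes via 1024^2, rounded once at the end.
        return Math.round(memoryMb * Math.pow(1024, 2) * memoryLimitFactor * replicas);
    }

    public static void main(String[] args) {
        // The example from the comment: 0.5 cpu, limit factor 1.0, one replica
        // -> the JM contributes 0.5 to the reported total.
        System.out.println(totalCpu(0.5, 1.0, 1));          // 0.5
        // Two 2048m pods at limit factor 1.0 -> 4294967296 bytes,
        // matching the total-memory value in the status above.
        System.out.println(totalMemoryBytes(2048, 1.0, 2)); // 4294967296
    }
}
```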






[GitHub] [flink] zentol commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


zentol commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1491677580

   > But is there something wrong with fixing this issue for the time being 
here?
   
   It breaks the code freeze that we decided on the ML. If you want to merge it 
regardless, then please bring it up on the mailing list first.





[GitHub] [flink-ml] Fanoid commented on a diff in pull request #221: [FLINK-31189] Add HasMaxIndexNum param to StringIndexer

2023-03-31 Thread via GitHub


Fanoid commented on code in PR #221:
URL: https://github.com/apache/flink-ml/pull/221#discussion_r1154268534


##
docs/content/docs/operators/feature/stringindexer.md:
##
@@ -59,9 +59,10 @@ Below are the parameters required by `StringIndexerModel`.
 
 `StringIndexer` needs parameters above and also below.
 
-| Key | Default   | Type   | Required | Description

 |
-|-|---||--|-|
-| stringOrderType | `"arbitrary"` | String | no   | How to order strings 
of each column. Supported values: 'arbitrary', 'frequencyDesc', 'frequencyAsc', 
'alphabetDesc', 'alphabetAsc'. |
+| Key | Default   | Type| Required | Description   

  |
+|-|---|-|--|-|
+| stringOrderType | `"arbitrary"` | String  | no   | How to order strings 
of each column. Supported values: 'arbitrary', 'frequencyDesc', 'frequencyAsc', 
'alphabetDesc', 'alphabetAsc'. |
+| MaxIndexNum | `2147483647`  | Integer | no   | The max number of 
indices for each column. It only works when stringOrderType is set as 
frequencyDesc.  |

Review Comment:
   nit: add quotes for 'stringOrderType' and 'frequencyDesc', same as other 
places.
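The `maxIndexNum` semantics described in the table (cap the number of indices when `stringOrderType` is `frequencyDesc`) can be sketched as a plain-Java truncation; this is illustrative only, not Flink ML's StringIndexer code:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MaxIndexNumSketch {
    // Orders distinct values by descending frequency and keeps at most
    // maxIndexNum of them, assigning indices 0..n-1. Illustrative only.
    public static Map<String, Integer> buildIndex(List<String> values, int maxIndexNum) {
        Map<String, Long> freq = values.stream()
                .collect(Collectors.groupingBy(v -> v, LinkedHashMap::new, Collectors.counting()));
        List<String> ordered = freq.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .limit(maxIndexNum) // the cap described by maxIndexNum
                .collect(Collectors.toList());
        Map<String, Integer> index = new LinkedHashMap<>();
        for (int i = 0; i < ordered.size(); i++) {
            index.put(ordered.get(i), i);
        }
        return index;
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "a", "c", "a", "b");
        // "c" is dropped because only the 2 most frequent values are kept.
        System.out.println(buildIndex(data, 2)); // {a=0, b=1}
    }
}
```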






[jira] [Assigned] (FLINK-31635) Support writing records to the new tiered store architecture

2023-03-31 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan reassigned FLINK-31635:
-

Assignee: Yuxin Tan

> Support writing records to the new tiered store architecture
> 
>
> Key: FLINK-31635
> URL: https://issues.apache.org/jira/browse/FLINK-31635
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>






[jira] [Assigned] (FLINK-31638) Downstream supports reading buffers from tiered store

2023-03-31 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan reassigned FLINK-31638:
-

Assignee: Wencong Liu

> Downstream supports reading buffers from tiered store
> -
>
> Key: FLINK-31638
> URL: https://issues.apache.org/jira/browse/FLINK-31638
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] pnowojski commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-03-31 Thread via GitHub


pnowojski commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1491654230

   Is there something wrong with fixing this issue for the time being here? 
It's causing some problems, after all. When is the Kafka connector going to be 
removed? What if it's delayed?





[GitHub] [flink] flinkbot commented on pull request #22316: [FLINK-31638][runtime] Downstream supports reading buffers from tiered store

2023-03-31 Thread via GitHub


flinkbot commented on PR #22316:
URL: https://github.com/apache/flink/pull/22316#issuecomment-1491651580

   
   ## CI report:
   
   * 09cfbcc61a465986d80a1bfd2f6892ed81004fb3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] flinkbot commented on pull request #22315: [FLINK-31683][docs-zh] Update the outdated Chinese filesystem connector docs

2023-03-31 Thread via GitHub


flinkbot commented on PR #22315:
URL: https://github.com/apache/flink/pull/22315#issuecomment-1491644661

   
   ## CI report:
   
   * f4e654e8ea2284100420dfef0e3b27fe59d90aa1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Closed] (FLINK-31626) HsSubpartitionFileReaderImpl should recycle skipped read buffers.

2023-03-31 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-31626.
--
Fix Version/s: 1.18.0
   1.17.1
   Resolution: Fixed

master(1.18) via 59462197ba725b9fc0118ec54ec9f1325b8a874a.
release-1.17 via 0f59c8f7b161e96f026529f542c00b1094107371

> HsSubpartitionFileReaderImpl should recycle skipped read buffers.
> -
>
> Key: FLINK-31626
> URL: https://issues.apache.org/jira/browse/FLINK-31626
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.1
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.1
>
>
> In FLINK-30189, {{HsSubpartitionFileReaderImpl}} will skip buffers that have 
> already been consumed from memory to avoid double-consumption. But these buffers 
> were not returned to the {{BatchShuffleReadBufferPool}}, resulting in read buffer 
> leaks. In addition, all loaded buffers should also be recycled after the data 
> view is released.
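The pattern behind the fix — every buffer taken from the shared pool must be returned, including on the skip path — can be sketched with a toy pool; names like `Pool` and `consume` are hypothetical stand-ins, not the actual `HsSubpartitionFileReaderImpl` code:

```java
public class BufferRecycleSketch {
    // Toy stand-in for BatchShuffleReadBufferPool: tracks how many
    // buffers are outstanding so a leak would be observable.
    public static class Pool {
        public int outstanding = 0;
        public Object request() { outstanding++; return new Object(); }
        public void recycle(Object buffer) { outstanding--; }
    }

    // Consumes buffers, skipping some; the bug class fixed above is
    // forgetting to recycle() on the skip branch.
    public static void consume(Pool pool, int total, int skipEvery) {
        for (int i = 0; i < total; i++) {
            Object buffer = pool.request();
            if (skipEvery > 0 && i % skipEvery == 0) {
                pool.recycle(buffer); // skipped buffers must still go back
                continue;
            }
            // ... hand the buffer to the downstream view ...
            pool.recycle(buffer);     // recycled once the view releases it
        }
    }

    public static void main(String[] args) {
        Pool pool = new Pool();
        consume(pool, 10, 3);
        System.out.println(pool.outstanding); // 0 -> nothing leaked
    }
}
```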





[jira] [Comment Edited] (FLINK-31626) HsSubpartitionFileReaderImpl should recycle skipped read buffers.

2023-03-31 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707221#comment-17707221
 ] 

Weijie Guo edited comment on FLINK-31626 at 3/31/23 9:42 AM:
-

master(1.18) via 59462197ba725b9fc0118ec54ec9f1325b8a874a.
release-1.17 via 0f59c8f7b161e96f026529f542c00b1094107371.


was (Author: weijie guo):
master(1.18) via 59462197ba725b9fc0118ec54ec9f1325b8a874a.
release-1.17 via 0f59c8f7b161e96f026529f542c00b1094107371

> HsSubpartitionFileReaderImpl should recycle skipped read buffers.
> -
>
> Key: FLINK-31626
> URL: https://issues.apache.org/jira/browse/FLINK-31626
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.1
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.1
>
>
> In FLINK-30189, {{HsSubpartitionFileReaderImpl}} will skip buffers that have 
> already been consumed from memory to avoid double-consumption. But these buffers 
> were not returned to the {{BatchShuffleReadBufferPool}}, resulting in read buffer 
> leaks. In addition, all loaded buffers should also be recycled after the data 
> view is released.





[jira] [Updated] (FLINK-31638) Downstream supports reading buffers from tiered store

2023-03-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31638:
---
Labels: pull-request-available  (was: )

> Downstream supports reading buffers from tiered store
> -
>
> Key: FLINK-31638
> URL: https://issues.apache.org/jira/browse/FLINK-31638
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>





