[jira] [Closed] (FLINK-33471) Kubernetes operator supports compiling with Java 21

2023-11-09 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33471.
--
Resolution: Fixed

merged to main ff4c730e0612a44fa9fc2eda09e1fe6bb7054145

> Kubernetes operator supports compiling with Java 21
> ---
>
> Key: FLINK-33471
> URL: https://issues.apache.org/jira/browse/FLINK-33471
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Since there is a new Java LTS version available (21), it would make sense to 
> support it



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33471] Make flink kubernetes operator compilable with jdk21 [flink-kubernetes-operator]

2023-11-09 Thread via GitHub


gyfora merged PR #701:
URL: https://github.com/apache/flink-kubernetes-operator/pull/701


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-32380) Support Java records

2023-11-09 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-32380.
--
Fix Version/s: 1.19.0
   Resolution: Fixed

> Support Java records
> 
>
> Key: FLINK-32380
> URL: https://issues.apache.org/jira/browse/FLINK-32380
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Chesnay Schepler
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Reportedly Java records are not supported, because they are neither detected 
> by our Pojo serializer nor supported by Kryo 2.x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32380) Support Java records

2023-11-09 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784706#comment-17784706
 ] 

Gyula Fora commented on FLINK-32380:


Merged to master  ba752b9e8b3fa0fbbe67d6d1bd70cccbc74e6ca0
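
For context, a minimal sketch of what this change enables (an illustration with made-up job and record names, assuming Flink 1.19 where this landed): a Java record used directly as a stream element, handled by the POJO serializer path instead of falling back to Kryo.
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RecordSketchJob {

    // Records are immutable data carriers; with FLINK-32380 Flink's POJO
    // serializer can create them via their canonical constructor.
    public record Order(long orderNumber, double price) {}

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(new Order(1L, 9.99), new Order(2L, 19.99))
                .keyBy(Order::orderNumber)
                .print();
        env.execute("record-sketch");
    }
}
{code}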

> Support Java records
> 
>
> Key: FLINK-32380
> URL: https://issues.apache.org/jira/browse/FLINK-32380
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Chesnay Schepler
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Reportedly Java records are not supported, because they are neither detected 
> by our Pojo serializer nor supported by Kryo 2.x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


gyfora merged PR #23490:
URL: https://github.com/apache/flink/pull/23490


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33512) Update download link in doc of Kafka connector

2023-11-09 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren closed FLINK-33512.
-
Resolution: Duplicate

> Update download link in doc of Kafka connector
> --
>
> Key: FLINK-33512
> URL: https://issues.apache.org/jira/browse/FLINK-33512
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: kafka-3.0.1
>Reporter: Qingsheng Ren
>Priority: Major
>
> Currently the download link of the Kafka connector in the documentation points to a 
> non-existent version `1.18.0`:
> DataStream API: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]
> Table API Kafka: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/kafka/]
> Table API Upsert Kafka: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/upsert-kafka/]
> The latest version should be 3.0.1-1.17 and 3.0.1-1.18.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33032) [JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-33032:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)
> ---
>
> Key: FLINK-33032
> URL: https://issues.apache.org/jira/browse/FLINK-33032
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
>
> [JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32997) [JUnit5 Migration] Module: flink-table-planner (StreamingTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-32997:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Module: flink-table-planner (StreamingTestBase)
> --
>
> Key: FLINK-32997
> URL: https://issues.apache.org/jira/browse/FLINK-32997
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> JUnit5 Migration Module: flink-table-planner (StreamingTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33024) [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-33024:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)
> -
>
> Key: FLINK-33024
> URL: https://issues.apache.org/jira/browse/FLINK-33024
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>
> [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33031) [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-33031:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)
> 
>
> Key: FLINK-33031
> URL: https://issues.apache.org/jira/browse/FLINK-33031
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
>
> [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33023) [JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-33023.

Resolution: Fixed

Resolved in master: cdb759b0ecda97bb04912553c7453710a07d499d

> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)
> --
>
> Key: FLINK-33023
> URL: https://issues.apache.org/jira/browse/FLINK-33023
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33023) [JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-33023:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)
> --
>
> Key: FLINK-33023
> URL: https://issues.apache.org/jira/browse/FLINK-33023
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33023][table-planner][JUnit5 Migration] Module: flink-table-planner (TableTestBase) [flink]

2023-11-09 Thread via GitHub


leonardBang merged PR #23349:
URL: https://github.com/apache/flink/pull/23349


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33023) [JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-33023:
---
Fix Version/s: 1.19.0

> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)
> --
>
> Key: FLINK-33023
> URL: https://issues.apache.org/jira/browse/FLINK-33023
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> [JUnit5 Migration] Module: flink-table-planner (TableTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33512) Update download link in doc of Kafka connector

2023-11-09 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-33512:
-

 Summary: Update download link in doc of Kafka connector
 Key: FLINK-33512
 URL: https://issues.apache.org/jira/browse/FLINK-33512
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Documentation
Affects Versions: kafka-3.0.1
Reporter: Qingsheng Ren


Currently the download link of the Kafka connector in the documentation points to a 
non-existent version `1.18.0`:

DataStream API: 
[https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]

Table API Kafka: 
[https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/kafka/]

Table API Upsert Kafka: 
[https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/upsert-kafka/]

The latest version should be 3.0.1-1.17 and 3.0.1-1.18.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25400) RocksDBStateBackend configurations does not work with SavepointEnvironment

2023-11-09 Thread Jinzhong Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784691#comment-17784691
 ] 

Jinzhong Li commented on FLINK-25400:
-

I think this issue still exists in 1.19. 
The task configuration of SavepointEnvironment should be passed to 
SavepointTaskManagerRuntimeInfo,
so that SavepointEnvironment.getTaskManagerInfo().getConfiguration() can obtain 
the task configuration.

If my analysis is correct, I want to fix this bug.
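
A minimal sketch of the direction I have in mind, in the same method-excerpt style as the snippets below (the constructor shape is illustrative, not the final fix):
{code:java}
// org.apache.flink.state.api.runtime.SavepointTaskManagerRuntimeInfo
private final Configuration configuration;

public SavepointTaskManagerRuntimeInfo(IOManager ioManager, Configuration configuration) {
    this.ioManager = ioManager;
    this.configuration = configuration;
}

@Override
public Configuration getConfiguration() {
    // Previously returned new Configuration(), dropping all user settings.
    return configuration;
}

// org.apache.flink.state.api.runtime.SavepointEnvironment#getTaskManagerInfo
@Override
public TaskManagerRuntimeInfo getTaskManagerInfo() {
    return new SavepointTaskManagerRuntimeInfo(getIOManager(), configuration);
} {code}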

> RocksDBStateBackend configurations does not work with SavepointEnvironment
> --
>
> Key: FLINK-25400
> URL: https://issues.apache.org/jira/browse/FLINK-25400
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.12.2
>Reporter: wuzhiyu
>Priority: Major
>
> Hi~
> I'm trying to use flink-state-processor-api to do state migrations by reading 
> states from an existing savepoint, and writing them into a new savepoint 
> after certain transformations.
> However, the reading rate does not meet my expectations.
> When I tried to tune RocksDB by enabling RocksDB native metrics, I found it 
> did not work.
> So I did some debugging and found that when the job runs under a 
> SavepointEnvironment, no RocksDBStateBackend configurations are passed to 
> RocksDBStateBackend.
> The whole process is described below (the code shown is from version 
> release-1.12.2):
> First, when 
> org.apache.flink.streaming.runtime.tasks.StreamTask#createStateBackend is 
> invoked:
>  
> {code:java}
> // org.apache.flink.streaming.runtime.tasks.StreamTask#createStateBackend
> private StateBackend createStateBackend() throws Exception {
> final StateBackend fromApplication =
> configuration.getStateBackend(getUserCodeClassLoader());
> return StateBackendLoader.fromApplicationOrConfigOrDefault(
> fromApplication,
> getEnvironment().getTaskManagerInfo().getConfiguration(),
> getUserCodeClassLoader(),
> LOG); {code}
> *getEnvironment()* returns a SavepointEnvironment instance.
>  
> And then 
> *org.apache.flink.state.api.runtime.SavepointEnvironment#getTaskManagerInfo* 
> is invoked, it returns a new 
> *org.apache.flink.state.api.runtime.SavepointTaskManagerRuntimeInfo* instance.
>  
> {code:java}
> // org.apache.flink.state.api.runtime.SavepointEnvironment#getTaskManagerInfo
> @Override
> public TaskManagerRuntimeInfo getTaskManagerInfo() {
> return new SavepointTaskManagerRuntimeInfo(getIOManager());
> } {code}
>  
> At last, 
> *org.apache.flink.state.api.runtime.SavepointTaskManagerRuntimeInfo#getConfiguration*
>  is invoked. It returns an empty configuration, which means all 
> configurations will be lost.
> {code:java}
> // 
> org.apache.flink.state.api.runtime.SavepointTaskManagerRuntimeInfo#getConfiguration
> @Override
> public Configuration getConfiguration() {
> return new Configuration();
> } {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33295) Separate SinkV2 and SinkV1Adapter tests

2023-11-09 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784689#comment-17784689
 ] 

Peter Vary edited comment on FLINK-33295 at 11/10/23 6:23 AM:
--

_InternalSinkWriterMetricGroup_ is an internal class, so in theory connectors 
should not use it.
 * How much effort would it be to enable the annotation check for the 
connectors?
 * We can expose the _MetricsGroupTestUtils_ in a test jar, if we see that the 
connectors would like to use it for testing.

Thanks for the heads-up!

 

Peter


was (Author: pvary):
`InternalSinkWriterMetricGroup` is an internal class, so in theory connectors 
should not use it.
 * How much effort would it be to enable the annotation check for the 
connectors?
 * We can expose the `MetricsGroupTestUtils` in a test jar, if we see that the 
connectors would like to use it for testing.

Thanks for the heads-up!

 

Peter

> Separate SinkV2 and SinkV1Adapter tests
> ---
>
> Key: FLINK-33295
> URL: https://issues.apache.org/jira/browse/FLINK-33295
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Connectors / Common
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Current SinkV2 tests are based on the sink generated by the 
> _org.apache.flink.streaming.runtime.operators.sink.TestSink_ test class. This 
> test class does not generate the SinkV2 directly, but generates a SinkV1 and 
> wraps it with a 
> _org.apache.flink.streaming.api.transformations.SinkV1Adapter._ So this 
> tests the SinkV2 only insofar as it is aligned with SinkV1 and the 
> SinkV1Adapter.
> We should have tests where we create a SinkV2 directly and the functionality 
> is tested without the adapter.
>  
>  
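
A minimal sketch of what creating a SinkV2 directly in a test could look like (illustrative only; the anonymous writer here just drops elements, whereas a real test fixture would record them for assertions):
{code:java}
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

public class DirectTestSink implements Sink<String> {

    @Override
    public SinkWriter<String> createWriter(InitContext context) {
        return new SinkWriter<String>() {
            @Override
            public void write(String element, Context context) {
                // a real test fixture would collect elements for assertions
            }

            @Override
            public void flush(boolean endOfInput) {}

            @Override
            public void close() {}
        };
    }
} {code}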



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33295) Separate SinkV2 and SinkV1Adapter tests

2023-11-09 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784689#comment-17784689
 ] 

Peter Vary commented on FLINK-33295:


`InternalSinkWriterMetricGroup` is an internal class, so in theory connectors 
should not use it.
 * How much effort would it be to enable the annotation check for the 
connectors?
 * We can expose the `MetricsGroupTestUtils` in a test jar, if we see that the 
connectors would like to use it for testing.

Thanks for the heads-up!

 

Peter

> Separate SinkV2 and SinkV1Adapter tests
> ---
>
> Key: FLINK-33295
> URL: https://issues.apache.org/jira/browse/FLINK-33295
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Connectors / Common
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Current SinkV2 tests are based on the sink generated by the 
> _org.apache.flink.streaming.runtime.operators.sink.TestSink_ test class. This 
> test class does not generate the SinkV2 directly, but generates a SinkV1 and 
> wraps it with a 
> _org.apache.flink.streaming.api.transformations.SinkV1Adapter._ So this 
> tests the SinkV2 only insofar as it is aligned with SinkV1 and the 
> SinkV1Adapter.
> We should have tests where we create a SinkV2 directly and the functionality 
> is tested without the adapter.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33511) flink SqlGateway select bigint type column get cast exception

2023-11-09 Thread xiaodao (Jira)
xiaodao created FLINK-33511:
---

 Summary: flink SqlGateway select bigint type column get cast 
exception
 Key: FLINK-33511
 URL: https://issues.apache.org/jira/browse/FLINK-33511
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Gateway
Affects Versions: 1.18.0
Reporter: xiaodao


When I open a Beeline client connected to the Flink SqlGateway,

I create a table like
{code:java}
// code placeholder
CREATE TABLE Orders (
order_number BIGINT,
price        DECIMAL(32,2),
buyer        ROW<first_name STRING, last_name STRING>,
order_time   TIMESTAMP(3)
) WITH (
  'connector' = 'datagen'
) {code}
and then run select * from Orders;

I get this exception:

java.lang.Long cannot be cast to org.apache.flink.table.data.StringData



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-30431][Connector/JDBC] JDBC Connector fails to reestablish the lost DB connect [flink-connector-jdbc]

2023-11-09 Thread via GitHub


Jiabao-Sun closed pull request #9: [FLINK-30431][Connector/JDBC] JDBC Connector 
fails to reestablish the lost DB connect
URL: https://github.com/apache/flink-connector-jdbc/pull/9


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-33493.
--

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.1.0
>
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo resolved FLINK-33493.

Resolution: Fixed

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.1.0
>
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-09 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1388936955


##
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DefaultResourceAllocationStrategy.java:
##
@@ -95,7 +96,7 @@ public DefaultResourceAllocationStrategy(
 SlotManagerUtils.generateDefaultSlotResourceProfile(
 totalResourceProfile, numSlotsPerWorker);
 this.availableResourceMatchingStrategy =
-evenlySpreadOutSlots
+taskManagerLoadBalanceMode == TaskManagerLoadBalanceMode.SLOTS

Review Comment:
   I lean toward introducing `TASKS` after the functionality is complete.
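   
   A sketch of the mode enum implied by the diff above (names taken from the diff; the commented-out value is the future addition under discussion):
   ```java
   public enum TaskManagerLoadBalanceMode {
       NONE,  // current default behavior
       SLOTS, // replaces the former evenlySpreadOutSlots boolean
       // TASKS: to be introduced once task-level balancing is complete
   }
   ```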



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784683#comment-17784683
 ] 

Weijie Guo commented on FLINK-33493:


Merged to main via 161b615.

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.1.0
>
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-09 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1388936955


##
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DefaultResourceAllocationStrategy.java:
##
@@ -95,7 +96,7 @@ public DefaultResourceAllocationStrategy(
 SlotManagerUtils.generateDefaultSlotResourceProfile(
 totalResourceProfile, numSlotsPerWorker);
 this.availableResourceMatchingStrategy =
-evenlySpreadOutSlots
+taskManagerLoadBalanceMode == TaskManagerLoadBalanceMode.SLOTS

Review Comment:
   I lean toward introducing `TASKS` after the functionality is complete.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-33493:
---
Fix Version/s: elasticsearch-3.1.0

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.1.0
>
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-33493:
--

Assignee: Yuxin Tan

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33493][connectors/elasticsearch] Fix ElasticsearchWriterITCase test [flink-connector-elasticsearch]

2023-11-09 Thread via GitHub


reswqa merged PR #81:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/81


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33510) Update plugin for SBOM generation to 2.7.10

2023-11-09 Thread Vinod Anandan (Jira)
Vinod Anandan created FLINK-33510:
-

 Summary: Update plugin for SBOM generation to 2.7.10
 Key: FLINK-33510
 URL: https://issues.apache.org/jira/browse/FLINK-33510
 Project: Flink
  Issue Type: Improvement
Reporter: Vinod Anandan


Update the CycloneDX Maven plugin for SBOM generation to 2.7.10
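
A minimal illustration of the corresponding pom.xml change (the plugin coordinates are the CycloneDX Maven plugin's; the surrounding build section and exact placement in Flink's pom are elided):
{code:xml}
<plugin>
  <groupId>org.cyclonedx</groupId>
  <artifactId>cyclonedx-maven-plugin</artifactId>
  <version>2.7.10</version>
</plugin>
{code}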



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33509) flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java

2023-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33509:
---
Labels: pull-request-available  (was: )

> flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java
> --
>
> Key: FLINK-33509
> URL: https://issues.apache.org/jira/browse/FLINK-33509
> Project: Flink
>  Issue Type: Bug
> Environment: Java 11
>Reporter: Ruby
>Priority: Major
>  Labels: pull-request-available
>
> When applying NonDex to the test, the NodeSelectorRequirement object shows 
> nondeterminism. The test assumes that requirement equals expected_requirement, 
> both instances of the NodeSelectorRequirement object. The NodeSelectorRequirement 
> object has three attributes: key, operator, and a values list. It is possible to 
> get the values list's elements in the order `[blockedNode1, blockedNode2]` while 
> the expected result is `[blockedNode2, blockedNode1]`, which makes the assertion 
> fail. 
>  
> The root cause is at line 56 of `KubernetesTaskManagerTestBase.java` 
> (flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/KubernetesTaskManagerTestBase.java).
>  There we define `BLOCKED_NODES` as a new `HashSet`. In 
> `InitTaskManagerDecoratorTest.java`, when initializing the 
> `expected_requirement` in the test, the values being passed are this 
> `BLOCKED_NODES`, which is an **unordered Set**. Later, the code converts 
> this **HashSet** into an **ArrayList**, which leads to the unstable ordering of 
> the values list.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33509] Fix flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java [flink]

2023-11-09 Thread via GitHub


flinkbot commented on PR #23694:
URL: https://github.com/apache/flink/pull/23694#issuecomment-1805064194

   
   ## CI report:
   
   * 67a3adf8ee55b17d2aaf963c4520cdb5597d9c11 UNKNOWN
   
   
   Bot commands: the @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Fix flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java [flink]

2023-11-09 Thread via GitHub


yijut2 opened a new pull request, #23694:
URL: https://github.com/apache/flink/pull/23694

   ## What is the purpose of the change
   This pull request fixes the flaky test testNodeAffinity() in 
InitTaskManagerDecoratorTest.java. 
(`flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/decorators/InitTaskManagerDecoratorTest.java`)
   
   ## Brief change log
   
   - **Test Failure**
   
   When applying NonDex to the test, the NodeSelectorRequirement object shows 
nondeterminism. The test assumes that requirement equals expected_requirement, 
both instances of the NodeSelectorRequirement object. The NodeSelectorRequirement 
object has three attributes: key, operator, and a values list. It is possible to 
get the values list's elements in the order `[blockedNode1, blockedNode2]` while 
the expected result is `[blockedNode2, blockedNode1]`, which makes the assertion 
fail. 
   
   - **Root Cause**
   
   The root cause is at line 56 of `KubernetesTaskManagerTestBase.java` 
(flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/KubernetesTaskManagerTestBase.java).
 There we define `BLOCKED_NODES` as a new `HashSet`. In 
`InitTaskManagerDecoratorTest.java`, when initializing the 
`expected_requirement` in the test, the values being passed are this 
`BLOCKED_NODES`, which is an **unordered Set**. Later, the code converts this 
**HashSet** into an **ArrayList**, which leads to the unstable ordering of the 
values list.
   
   - **Command to reproduce the issue**
   
   `mvn -pl flink-kubernetes edu.illinois:nondex-maven-plugin:2.1.1:nondex 
-Dtest=org.apache.flink.kubernetes.kubeclient.decorators.InitTaskManagerDecoratorTest#testNodeAffinity`
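   
   - **Sketch of the fix idea**
   
   A minimal sketch (names assumed; the actual change may differ) of an 
order-insensitive assertion that removes the dependence on HashSet iteration 
order:
   
   ```java
   import java.util.List;
   import java.util.Set;
   
   import static org.assertj.core.api.Assertions.assertThat;
   
   class NodeAffinityAssertionSketch {
       static void assertRequirementValues(List<String> actualValues, Set<String> blockedNodes) {
           // HashSet iteration order is unspecified, so compare contents
           // regardless of order instead of position by position.
           assertThat(actualValues).containsExactlyInAnyOrderElementsOf(blockedNodes);
       }
   }
   ```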
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   
   This change fixes the test itself. It can be verified as follows:
 - *Ran NonDex 100 times and all runs passed (NonDex is a tool for 
detecting and debugging wrong assumptions in Java)*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33102) Document the standalone autoscaler

2023-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33102:
---
Labels: pull-request-available  (was: )

> Document the standalone autoscaler
> --
>
> Key: FLINK-33102
> URL: https://issues.apache.org/jira/browse/FLINK-33102
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33102][autoscaler] Document the autoscaler standalone and Extensibility of Autoscaler [flink-kubernetes-operator]

2023-11-09 Thread via GitHub


1996fanrui opened a new pull request, #703:
URL: https://github.com/apache/flink-kubernetes-operator/pull/703

   ## Brief change log
   
   [FLINK-33102][autoscaler] Document the autoscaler standalone and 
Extensibility of Autoscaler
   
   The following is the documentation rendered on my Mac.
   
   (screenshot: 
https://github.com/apache/flink-kubernetes-operator/assets/38427477/771a2a26-64b6-4d12-a320-7ba22caf9ef2)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33509) flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java

2023-11-09 Thread Ruby (Jira)
Ruby created FLINK-33509:


 Summary: flaky test testNodeAffinity() in 
InitTaskManagerDecoratorTest.java
 Key: FLINK-33509
 URL: https://issues.apache.org/jira/browse/FLINK-33509
 Project: Flink
  Issue Type: Bug
 Environment: Java 11
Reporter: Ruby


When applying NonDex to the test, the NodeSelectorRequirement object shows 
nondeterminism. The test assumes that requirement equals expected_requirement, 
both instances of the NodeSelectorRequirement object. The NodeSelectorRequirement 
object has three attributes: key, operator, and a values list. It is possible to 
get the values list's elements in the order `[blockedNode1, blockedNode2]` while 
the expected result is `[blockedNode2, blockedNode1]`, which makes the assertion 
fail.

 

The root cause is at line 56 of `KubernetesTaskManagerTestBase.java` 
(flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/KubernetesTaskManagerTestBase.java).
 There we define `BLOCKED_NODES` as a new `HashSet`. In 
`InitTaskManagerDecoratorTest.java`, when initializing the 
`expected_requirement` in the test, the values being passed are this 
`BLOCKED_NODES`, which is an **unordered Set**. Later, the code converts this 
**HashSet** into an **ArrayList**, which leads to the unstable ordering of the 
values list.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread Yuxin Tan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784674#comment-17784674
 ] 

Yuxin Tan commented on FLINK-33493:
---

[~martijnvisser] Yeah. I will take a look at this issue.

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33508) Support for wildcard paths in Flink History Server for multi cluster environment

2023-11-09 Thread Jayadeep Jayaraman (Jira)
Jayadeep Jayaraman created FLINK-33508:
--

 Summary: Support for wildcard paths in Flink History Server for 
multi cluster environment
 Key: FLINK-33508
 URL: https://issues.apache.org/jira/browse/FLINK-33508
 Project: Flink
  Issue Type: Improvement
Reporter: Jayadeep Jayaraman


In the cloud, users typically create multiple ephemeral clusters and want a 
single history server to browse historical jobs.

To implement this, the history server needs to support wildcard paths, and this 
change adds that support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33492][docs] Fix unavailable links in connector download page [flink]

2023-11-09 Thread via GitHub


liyubin117 commented on PR #23693:
URL: https://github.com/apache/flink/pull/23693#issuecomment-1805018421

   @MartijnVisser validation passed in our dev environment, PTAL, thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33492][docs] Fix unavailable links in connector download page [flink]

2023-11-09 Thread via GitHub


flinkbot commented on PR #23693:
URL: https://github.com/apache/flink/pull/23693#issuecomment-1805016619

   
   ## CI report:
   
   * e01fe4f7a94e35e3f095499a169c747814ea19dd UNKNOWN
   
   
   Bot commands: the @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33507) JsonToRowDataConverters can't parse zero timestamp '0000-00-00 00:00:00'

2023-11-09 Thread jinzhuguang (Jira)
jinzhuguang created FLINK-33507:
---

 Summary: JsonToRowDataConverters can't parse zero timestamp  
'0000-00-00 00:00:00'
 Key: FLINK-33507
 URL: https://issues.apache.org/jira/browse/FLINK-33507
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.16.0
 Environment: Flink 1.16.0
Reporter: jinzhuguang


When I use Flink CDC to synchronize data from MySQL, Kafka is used to store 
the data in JSON format. But when I read the data from Kafka, I found that the 
MySQL Timestamp value "0000-00-00 00:00:00" could not be parsed by 
Flink, and the error was reported as follows:

Caused by: 
org.apache.flink.formats.json.JsonToRowDataConverters$JsonParseException: Fail 
to deserialize at field: data.
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createRowConverter$ef66fe9a$1(JsonToRowDataConverters.java:354)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$wrapIntoNullableConverter$de0b9253$1(JsonToRowDataConverters.java:380)
    at 
org.apache.flink.formats.json.JsonRowDataDeserializationSchema.convertToRowData(JsonRowDataDeserializationSchema.java:131)
    at 
org.apache.flink.formats.json.canal.CanalJsonDeserializationSchema.deserialize(CanalJsonDeserializationSchema.java:234)
    ... 17 more
Caused by: 
org.apache.flink.formats.json.JsonToRowDataConverters$JsonParseException: Fail 
to deserialize at field: update_time.
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createRowConverter$ef66fe9a$1(JsonToRowDataConverters.java:354)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$wrapIntoNullableConverter$de0b9253$1(JsonToRowDataConverters.java:380)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createArrayConverter$94141d67$1(JsonToRowDataConverters.java:304)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$wrapIntoNullableConverter$de0b9253$1(JsonToRowDataConverters.java:380)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.convertField(JsonToRowDataConverters.java:370)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createRowConverter$ef66fe9a$1(JsonToRowDataConverters.java:350)
    ... 20 more
Caused by: java.time.format.DateTimeParseException: Text '0000-00-00 00:00:00' 
could not be parsed: Invalid value for MonthOfYear (valid values 1 - 12): 0
    at 
java.time.format.DateTimeFormatter.createError(DateTimeFormatter.java:1920)
    at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1781)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.convertToTimestamp(JsonToRowDataConverters.java:224)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$wrapIntoNullableConverter$de0b9253$1(JsonToRowDataConverters.java:380)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.convertField(JsonToRowDataConverters.java:370)
    at 
org.apache.flink.formats.json.JsonToRowDataConverters.lambda$createRowConverter$ef66fe9a$1(JsonToRowDataConverters.java:350)
    ... 25 more
Caused by: java.time.DateTimeException: Invalid value for MonthOfYear (valid 
values 1 - 12): 0
    at java.time.temporal.ValueRange.checkValidIntValue(ValueRange.java:330)
    at java.time.temporal.ChronoField.checkValidIntValue(ChronoField.java:722)
    at java.time.chrono.IsoChronology.resolveYMD(IsoChronology.java:550)
    at java.time.chrono.IsoChronology.resolveYMD(IsoChronology.java:123)
    at 
java.time.chrono.AbstractChronology.resolveDate(AbstractChronology.java:472)
    at java.time.chrono.IsoChronology.resolveDate(IsoChronology.java:492)
    at java.time.chrono.IsoChronology.resolveDate(IsoChronology.java:123)
    at java.time.format.Parsed.resolveDateFields(Parsed.java:351)
    at java.time.format.Parsed.resolveFields(Parsed.java:257)
    at java.time.format.Parsed.resolve(Parsed.java:244)
    at 
java.time.format.DateTimeParseContext.toResolved(DateTimeParseContext.java:331)
    at 
java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1955)
    at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1777)
    ... 29 more

Usually MySQL allows the server and client to parse this type of data and treat 
it as NULL, so I think Flink should also support it.
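
A minimal sketch (not the actual Flink change; class and method names are illustrative) of the lenient behavior I have in mind, treating the MySQL zero timestamp as NULL:
{code:java}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LenientTimestampSketch {

    private static final String MYSQL_ZERO_TIMESTAMP = "0000-00-00 00:00:00";
    private static final DateTimeFormatter FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    /** Returns null for MySQL's zero timestamp instead of throwing. */
    public static LocalDateTime parseOrNull(String text) {
        if (MYSQL_ZERO_TIMESTAMP.equals(text)) {
            return null; // mirror MySQL's zero-date-as-NULL convention
        }
        return LocalDateTime.parse(text, FORMATTER);
    }
} {code}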



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33492][docs] Fix unavailable links in connector download page [flink]

2023-11-09 Thread via GitHub


liyubin117 opened a new pull request, #23693:
URL: https://github.com/apache/flink/pull/23693

   ## What is the purpose of the change
   
   Fix unavailable links in connector download page
   
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/downloads/
   
   ## Brief change log
   
   * correct hbase, kafka, upsert-kafka connector download urls
   
   ## Verifying this change
   
   Docs only change.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33492) Fix unavailable links in connector download page

2023-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33492:
---
Labels: pull-request-available  (was: )

> Fix unavailable links in connector download page
> 
>
> Key: FLINK-33492
> URL: https://issues.apache.org/jira/browse/FLINK-33492
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> there are several unavailable connector download links (hbase, kafka, etc)
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/downloads/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33493) Elasticsearch connector ElasticsearchWriterITCase test failed

2023-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33493:
---
Labels: pull-request-available  (was: )

> Elasticsearch connector ElasticsearchWriterITCase test failed
> -
>
> Key: FLINK-33493
> URL: https://issues.apache.org/jira/browse/FLINK-33493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>
> When I ran the tests, the test failed. The failure reason is
> {code:java}
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[197,46]
>  cannot find symbol
>   symbol:   method 
> mock(org.apache.flink.metrics.MetricGroup,org.apache.flink.metrics.groups.OperatorIOMetricGroup)
>   location: class 
> org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchWriterITCase.java:[273,46]
>  cannot find symbol
>   symbol:   method mock(org.apache.flink.metrics.MetricGroup)
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6809899863/job/18517273714?pr=77#step:13:134.
> ElasticsearchWriterITCase called Flink "InternalSinkWriterMetricGroup#mock", 
> and it is renamed in https://github.com/apache/flink/pull/23541 
> ([FLINK-33295|https://issues.apache.org/jira/browse/FLINK-33295] in Flink 
> 1.19). So the test failed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33493][connectors/elasticsearch] Fix ElasticsearchWriterITCase test [flink-connector-elasticsearch]

2023-11-09 Thread via GitHub


TanYuxin-tyx opened a new pull request, #81:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/81

   This is a fix for https://issues.apache.org/jira/browse/FLINK-33493. 
   
   ElasticsearchWriterITCase called Flink's `InternalSinkWriterMetricGroup#mock`, 
which was renamed in https://github.com/apache/flink/pull/23541 
([FLINK-33295](https://issues.apache.org/jira/browse/FLINK-33295) in Flink 
1.19), so the test failed.
   
   This fixes it by creating a `TestingSinkWriterMetricGroup`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26694][table] Support lookup join via a multi-level inheritance of TableFunction [flink]

2023-11-09 Thread via GitHub


YesOrNo828 commented on PR #23684:
URL: https://github.com/apache/flink/pull/23684#issuecomment-1804987153

   @leonardBang  @lincoln-lil  Could you review this patch when you have time? 
Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32993][table] Datagen connector handles fixed-length data types according to the original definition by default [flink]

2023-11-09 Thread via GitHub


LadyForest commented on PR #23678:
URL: https://github.com/apache/flink/pull/23678#issuecomment-1804985921

   > @LadyForest Hi, we just need to modify a few lines of code to implement the 
feature, and CI has now succeeded. Looking forward to your review, thanks!
   
   Hi @liyubin117 thanks for reaching out! I'll take a look as soon as possible.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26694][table] Support lookup join via a multi-level inheritance of TableFunction [flink]

2023-11-09 Thread via GitHub


YesOrNo828 commented on PR #23684:
URL: https://github.com/apache/flink/pull/23684#issuecomment-1804981056

   @flinkbot run azure
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33059] Support transparent compression for file-connector for all file input formats [flink]

2023-11-09 Thread via GitHub


ruslandanilin commented on PR #23443:
URL: https://github.com/apache/flink/pull/23443#issuecomment-1804803934

   Thank you @echauchot !


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33471] Make flink kubernetes operator compilable with jdk21 [flink-kubernetes-operator]

2023-11-09 Thread via GitHub


mxm commented on code in PR #701:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/701#discussion_r1388611990


##
.github/workflows/ci.yml:
##
@@ -26,7 +26,7 @@ jobs:
 name: test_ci
 strategy:
   matrix:
-java-version: [ 11, 17 ]
+java-version: [ 11, 17, 21 ]

Review Comment:
   Oh! I missed that you were excluding. This is fine then.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33506] Add support for jdk17 for AWS connectors [flink-connector-aws]

2023-11-09 Thread via GitHub


snuyanzin opened a new pull request, #115:
URL: https://github.com/apache/flink-connector-aws/pull/115

   ## Purpose of the change
   
   The PR adds support for JDK 17; it also adds Java 17 to the CI build matrix
   
   ## Verifying this change
   maven
   
   This change is already covered by existing tests, with Java 17 added to CI
   
   
   ## Significant changes
   
   - [X] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33506) Make AWS connectors compilable with jdk17

2023-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33506:
---
Labels: pull-request-available  (was: )

> Make AWS connectors compilable with jdk17
> -
>
> Key: FLINK-33506
> URL: https://issues.apache.org/jira/browse/FLINK-33506
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / AWS
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Since Flink 1.18 with JDK 17 support has been released, it would make sense to 
> add such support for connectors



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33506) Make AWS connectors compilable with jdk17

2023-11-09 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33506:
---

 Summary: Make AWS connectors compilable with jdk17
 Key: FLINK-33506
 URL: https://issues.apache.org/jira/browse/FLINK-33506
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / AWS
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


Since Flink 1.18 with JDK 17 support has been released, it would make sense to 
add such support for connectors



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33402] Hybrid Source Concurrency Race Condition Fixes and Related Bugs Results in Data Loss [flink]

2023-11-09 Thread via GitHub


varun1729DD commented on PR #23687:
URL: https://github.com/apache/flink/pull/23687#issuecomment-1804678547

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


XComp commented on code in PR #23490:
URL: https://github.com/apache/flink/pull/23490#discussion_r1388577908


##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -400,21 +444,23 @@ public T deserialize(DataInputView source) throws 
IOException {
 target = createInstance();
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord) {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordHelper.newBuilder();

Review Comment:
   Ok, now I see where I was wrong: I was so distracted by the redundant code 
that I didn't see that the `IllegalAccessException` is thrown by the 
`Field#set` method, which is not present in the first code block. Sorry for 
bringing this up over and over again :facepalm: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


gyfora commented on PR #23490:
URL: https://github.com/apache/flink/pull/23490#issuecomment-1804596187

   @XComp I would like to merge this unless you have any further functional 
comments. 
   
   Since you are working on refactoring this, you can try removing some of the 
try-catch blocks if you feel like it, but I could not do it easily.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


gyfora commented on code in PR #23490:
URL: https://github.com/apache/flink/pull/23490#discussion_r1388529025


##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -400,21 +444,23 @@ public T deserialize(DataInputView source) throws 
IOException {
 target = createInstance();
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord) {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordHelper.newBuilder();

Review Comment:
   You can try removing it, but I think you will find that it's needed.






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


gyfora commented on code in PR #23490:
URL: https://github.com/apache/flink/pull/23490#discussion_r1388527803


##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -400,21 +444,23 @@ public T deserialize(DataInputView source) throws 
IOException {
 target = createInstance();
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord) {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordHelper.newBuilder();

Review Comment:
   I only add the try-catch block where the exception is declared as thrown 
(where it's necessary); it's not thrown in this code, hence no try-catch block. 
I would remove it if I could :)
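
   For context, a minimal, framework-free sketch of why a builder is needed at 
this point: records are immutable, so deserialized field values must be 
buffered and handed to the canonical constructor at the end. The names below 
are illustrative, not the actual Flink classes.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.RecordComponent;

/** Buffers field values and invokes the record's canonical constructor on build(). */
final class RecordBuilderSketch<T> {
    private final Constructor<T> canonicalConstructor;
    private final Object[] values;

    RecordBuilderSketch(Class<T> recordClass) throws NoSuchMethodException {
        RecordComponent[] components = recordClass.getRecordComponents();
        Class<?>[] parameterTypes = new Class<?>[components.length];
        for (int i = 0; i < components.length; i++) {
            parameterTypes[i] = components[i].getType();
        }
        this.canonicalConstructor = recordClass.getDeclaredConstructor(parameterTypes);
        this.values = new Object[components.length];
    }

    void setField(int pos, Object value) {
        // buffered: a record field cannot be assigned after construction
        values[pos] = value;
    }

    T build() throws ReflectiveOperationException {
        return canonicalConstructor.newInstance(values);
    }
}

record Point(int x, int y) {}

class RecordBuilderDemo {
    public static void main(String[] args) throws Exception {
        RecordBuilderSketch<Point> builder = new RecordBuilderSketch<>(Point.class);
        builder.setField(0, 1);
        builder.setField(1, 2);
        System.out.println(builder.build()); // prints Point[x=1, y=2]
    }
}
```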






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33505) switch away from using netty 3 based Pekko Classic Remoting

2023-11-09 Thread PJ Fanning (Jira)
PJ Fanning created FLINK-33505:
--

 Summary: switch away from using netty 3 based Pekko Classic 
Remoting
 Key: FLINK-33505
 URL: https://issues.apache.org/jira/browse/FLINK-33505
 Project: Flink
  Issue Type: Improvement
Reporter: PJ Fanning


It is my understanding that Flink uses the Netty 3 based Pekko Classic Remoting.

Netty 3 has a lot of security issues.

It will be months before Pekko 1.1.0 is released, but that release switches 
Classic Remoting to use Netty 4.

Akka and Pekko actually recommend that users switch to using Artery-based 
communications.

Even if you wait for Pekko 1.1.0, the new Netty 4-based classic remoting will 
need to be tested.

There is also the option of dropping Pekko - FLINK-29281

If you don't want to try Artery and don't want to wait for Pekko 1.1.0, you 
might be able to copy over 5 classes that add Netty 4 support and update your 
application.conf. This would be approximately 
https://github.com/apache/incubator-pekko/pull/778. There is a bit more work to 
do in terms of debugging the test failure and it seems that this change is 
unlikely to be merged back to the Pekko 1.0.x line.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33378] Prepare actions for flink version 1.18 [flink-connector-jdbc]

2023-11-09 Thread via GitHub


maver1ck commented on PR #76:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/76#issuecomment-1804449467

   @MartijnVisser any chance to merge this ?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33471] Make flink kubernetes operator compilable with jdk21 [flink-kubernetes-operator]

2023-11-09 Thread via GitHub


snuyanzin commented on code in PR #701:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/701#discussion_r1388456764


##
.github/workflows/ci.yml:
##
@@ -26,7 +26,7 @@ jobs:
 name: test_ci
 strategy:
   matrix:
-java-version: [ 11, 17 ]
+java-version: [ 11, 17, 21 ]

Review Comment:
   It does not run all of them; it runs only against Flink 1.18 (same as for 17). 
Below there are exclusions for 1.16 and 1.17.
   
   I'm OK with reducing the number of tests, but then it would be great if you 
could provide a hint about what else could be cut.
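
   For readers following along, a sketch of the exclusion pattern under 
discussion (the version values below are illustrative, not necessarily the 
workflow's actual entries):

```yaml
strategy:
  matrix:
    java-version: [ 11, 17, 21 ]
    flink-version: [ "1.16", "1.17", "1.18" ]
    exclude:
      # newer JDKs are exercised only against the newest Flink
      - java-version: 17
        flink-version: "1.16"
      - java-version: 17
        flink-version: "1.17"
      - java-version: 21
        flink-version: "1.16"
      - java-version: 21
        flink-version: "1.17"
```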



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33471] Make flink kubernetes operator compilable with jdk21 [flink-kubernetes-operator]

2023-11-09 Thread via GitHub


mxm commented on code in PR #701:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/701#discussion_r1388431713


##
.github/workflows/ci.yml:
##
@@ -26,7 +26,7 @@ jobs:
 name: test_ci
 strategy:
   matrix:
-java-version: [ 11, 17 ]
+java-version: [ 11, 17, 21 ]

Review Comment:
   Is it worth exploding the test matrix by adding another Java version? Can't 
we test Java 21 compatibility without running all the integration tests with it?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33487) Add the new Snowflake connector to supported list

2023-11-09 Thread Mohsen Rezaei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784575#comment-17784575
 ] 

Mohsen Rezaei commented on FLINK-33487:
---

Hey [~martijnvisser] that makes sense, and thanks for the quick response and 
pointer.

> Add the new Snowflake connector to supported list
> -
>
> Key: FLINK-33487
> URL: https://issues.apache.org/jira/browse/FLINK-33487
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Mohsen Rezaei
>Priority: Major
>
> Code was contributed in FLINK-32737.
> Add this new connector to the list of supported ones in the documentation 
> with a corresponding sub-page for the details of the connector:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/overview/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Pull Request sonacloud setup [flink]

2023-11-09 Thread via GitHub


gboussida closed pull request #23692: Pull Request sonacloud setup
URL: https://github.com/apache/flink/pull/23692


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Pull Request sonacloud setup [flink]

2023-11-09 Thread via GitHub


gboussida opened a new pull request, #23692:
URL: https://github.com/apache/flink/pull/23692

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33419] Port PROCTIME/ROWTIME functions to the new inference stack [flink]

2023-11-09 Thread via GitHub


jnh5y commented on code in PR #23634:
URL: https://github.com/apache/flink/pull/23634#discussion_r1388314920


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/fieldExpression.scala:
##
@@ -103,71 +103,6 @@ case class TableReference(name: String, tableOperation: 
QueryOperation)
   override def toString: String = s"$name"
 }
 
-abstract class TimeAttribute(val expression: PlannerExpression)
-  extends UnaryExpression
-  with WindowProperty {
-
-  override private[flink] def child: PlannerExpression = expression
-}
-
-case class RowtimeAttribute(expr: PlannerExpression) extends 
TimeAttribute(expr) {
-
-  override private[flink] def validateInput(): ValidationResult = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isProctimeIndicatorType(tpe) =>
-ValidationFailure("A proctime window cannot provide a rowtime 
attribute.")
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-ValidationSuccess
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-ValidationSuccess
-  case WindowReference(_, _) =>
-ValidationFailure("Reference to a rowtime or proctime window 
required.")
-  case any =>
-ValidationFailure(
-  s"The '.rowtime' expression can only be used for table definitions 
and windows, " +
-s"while [$any] was found.")
-}
-  }
-
-  override def resultType: TypeInformation[_] = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-TimeIndicatorTypeInfo.ROWTIME_INDICATOR
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-Types.SQL_TIMESTAMP

Review Comment:
   Nice!  Glad I asked, and good job finding the corner/edge case.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33504) Supported parallel jobs

2023-11-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-33504:
-

 Summary: Supported parallel jobs
 Key: FLINK-33504
 URL: https://issues.apache.org/jira/browse/FLINK-33504
 Project: Flink
  Issue Type: Sub-task
  Components: Build System / CI
Reporter: Matthias Pohl


{quote}
Up to 10 free Microsoft-hosted parallel jobs that can run for up to 360 minutes 
(6 hours) each time
{quote}
Azure CI allows up to 10 parallel jobs for public repos 
([source|https://learn.microsoft.com/en-us/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops=ms-hosted]).

Looks like GHA allows up to 256 parallel jobs 
([source|https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs]):

{quote}
A matrix will generate a maximum of 256 jobs per workflow run.
{quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33476) join table throw Record is too big Exception

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-33476.
-
Resolution: Won't Fix

[~zhuyinjun] Unfortunately, I don't have an answer to your question. It's not 
my field of expertise. Please use the [user mailing 
list|https://flink.apache.org/what-is-flink/community/#mailing-lists] for 
questions like that. You're more likely to get an answer there than by pinging 
individuals. Jira is preferably used for actual bugs or improvements.

I'm closing this issue as it doesn't seem to cover an actual bug.

> join table throw Record is too big Exception
> 
>
> Key: FLINK-33476
> URL: https://issues.apache.org/jira/browse/FLINK-33476
> Project: Flink
>  Issue Type: New Feature
>Reporter: zhu
>Priority: Major
>
> When I join a table, Flink throws a "Record is too big" exception. What parameters 
> should I configure to solve this problem?
>  
> Caused by: java.io.IOException: Record is too big, it can't be added to a 
> empty InMemoryBuffer! Record size: 1163764, Buffer: 0
>     at 
> org.apache.flink.table.runtime.util.ResettableExternalBuffer.throwTooBigException(ResettableExternalBuffer.java:193)
>     at 
> org.apache.flink.table.runtime.util.ResettableExternalBuffer.add(ResettableExternalBuffer.java:149)
>     at 
> org.apache.flink.table.runtime.operators.join.SortMergeJoinIterator.bufferMatchingRows(SortMergeJoinIterator.java:118)
>     at 
> org.apache.flink.table.runtime.operators.join.SortMergeOneSideOuterJoinIterator.nextOuterJoin(SortMergeOneSideOuterJoinIterator.java:88)
>     at 
> org.apache.flink.table.runtime.operators.join.SortMergeJoinOperator.oneSideOuterJoin(SortMergeJoinOperator.java:413)
>     at 
> org.apache.flink.table.runtime.operators.join.SortMergeJoinOperator.doSortMergeJoin(SortMergeJoinOperator.java:298)
>     at 
> org.apache.flink.table.runtime.operators.join.SortMergeJoinOperator.endInput(SortMergeJoinOperator.java:248)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.endOperatorInput(TableOperatorWrapper.java:124)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.propagateEndOperatorInput(TableOperatorWrapper.java:136)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.endOperatorInput(TableOperatorWrapper.java:130)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.propagateEndOperatorInput(TableOperatorWrapper.java:136)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.endOperatorInput(TableOperatorWrapper.java:122)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.propagateEndOperatorInput(TableOperatorWrapper.java:136)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.TableOperatorWrapper.endOperatorInput(TableOperatorWrapper.java:122)
>     at 
> org.apache.flink.table.runtime.operators.multipleinput.BatchMultipleInputStreamOperator.endInput(BatchMultipleInputStreamOperator.java:56)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:98)
>     at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.endInput(RegularOperatorChain.java:97)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:68)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


XComp commented on code in PR #23490:
URL: https://github.com/apache/flink/pull/23490#discussion_r1388282048


##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -473,25 +521,40 @@ public T deserialize(T reuse, DataInputView source) 
throws IOException {
 }
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord()) {
 try {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordHelper.newBuilder();
 for (int i = 0; i < numFields; i++) {
 boolean isNull = source.readBoolean();
 
 if (fields[i] != null) {
 if (isNull) {
-fields[i].set(reuse, null);
+builder.setField(i, null);
 } else {
-Object field;
+builder.setField(i, deserializeField(null, i, 
source));

Review Comment:
   > If you pass a non-null reuse it's not null, but otherwise createInstance 
returns null for records
   
   true, I overlooked that `createInstance` actually returns `null` in certain 
situations.



##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -400,21 +444,23 @@ public T deserialize(DataInputView source) throws 
IOException {
 target = createInstance();
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord) {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordHelper.newBuilder();

Review Comment:
   This is not resolved in the code. I still struggle to understand why this is 
not necessary to be added here :thinking: 



##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -473,25 +521,41 @@ public T deserialize(T reuse, DataInputView source) 
throws IOException {
 }
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord()) {
 try {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordFactory.newBuilder();
 for (int i = 0; i < numFields; i++) {
 boolean isNull = source.readBoolean();
 
 if (fields[i] != null) {
 if (isNull) {
-fields[i].set(reuse, null);
+builder.setField(i, null);
 } else {
-Object field;
+Object reuseField = reuse == null ? null : 
fields[i].get(reuse);
+builder.setField(i, deserializeField(reuseField, 
i, source));
+}
+} else if (!isNull) {
+// read and dump a pre-existing field value
+fieldSerializers[i].deserialize(source);
+}
+}
 
-Object reuseField = fields[i].get(reuse);
-if (reuseField != null) {
-field = 
fieldSerializers[i].deserialize(reuseField, source);
-} else {
-field = 
fieldSerializers[i].deserialize(source);
-}
+reuse = builder.build();
+} catch (IllegalAccessException e) {
+throw new RuntimeException(
+"Error during POJO copy, this should not happen since 
we check the fields before.",
+e);
+}
+} else if ((flags & NO_SUBCLASS) != 0) {

Review Comment:
   test



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-11-09 Thread via GitHub


XComp commented on code in PR #23490:
URL: https://github.com/apache/flink/pull/23490#discussion_r1388279563


##
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializer.java:
##
@@ -473,25 +521,41 @@ public T deserialize(T reuse, DataInputView source) 
throws IOException {
 }
 }
 
-if ((flags & NO_SUBCLASS) != 0) {
+if (isRecord()) {
 try {
+JavaRecordBuilderFactory.JavaRecordBuilder builder = 
recordFactory.newBuilder();
 for (int i = 0; i < numFields; i++) {
 boolean isNull = source.readBoolean();
 
 if (fields[i] != null) {
 if (isNull) {
-fields[i].set(reuse, null);
+builder.setField(i, null);
 } else {
-Object field;
+Object reuseField = reuse == null ? null : 
fields[i].get(reuse);
+builder.setField(i, deserializeField(reuseField, 
i, source));
+}
+} else if (!isNull) {
+// read and dump a pre-existing field value
+fieldSerializers[i].deserialize(source);
+}
+}
 
-Object reuseField = fields[i].get(reuse);
-if (reuseField != null) {
-field = 
fieldSerializers[i].deserialize(reuseField, source);
-} else {
-field = 
fieldSerializers[i].deserialize(source);
-}
+reuse = builder.build();
+} catch (IllegalAccessException e) {
+throw new RuntimeException(
+"Error during POJO copy, this should not happen since 
we check the fields before.",
+e);
+}
+} else if ((flags & NO_SUBCLASS) != 0) {

Review Comment:
   test



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33418) SqlGatewayE2ECase failed due to ConnectException

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33418:
-

Assignee: Matthias Pohl

> SqlGatewayE2ECase failed due to ConnectException
> 
>
> Key: FLINK-33418
> URL: https://issues.apache.org/jira/browse/FLINK-33418
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client, Tests
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: github-actions, pull-request-available, test-stability
>
> The container couldn't be started in [this 
> build|https://github.com/XComp/flink/actions/runs/6696839844/job/18195926497#step:15:11765]:
> {code}
> Error: 20:18:40 20:18:40.111 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 110.789 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Error: 20:18:40 20:18:40.111 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase  Time elapsed: 110.789 s  
> <<< ERROR!
> Oct 30 20:18:40 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hdp2.6-hive:10
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Oct 30 20:18:40   at 
> org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Oct 30 20:18:40   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Oct 30 20:18:40 Caused by: org.rnorth.ducttape.RetryCountExceededException: 
> 

[jira] [Reopened] (FLINK-33418) SqlGatewayE2ECase failed due to ConnectException

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reopened FLINK-33418:
---

I'm re-opening the issue because the fix would mean two tests that could run in 
Docker.

> SqlGatewayE2ECase failed due to ConnectException
> 
>
> Key: FLINK-33418
> URL: https://issues.apache.org/jira/browse/FLINK-33418
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client, Tests
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: github-actions, pull-request-available, test-stability
>
> The container couldn't be started in [this 
> build|https://github.com/XComp/flink/actions/runs/6696839844/job/18195926497#step:15:11765]:
> {code}
> Error: 20:18:40 20:18:40.111 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 110.789 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Error: 20:18:40 20:18:40.111 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase  Time elapsed: 110.789 s  
> <<< ERROR!
> Oct 30 20:18:40 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hdp2.6-hive:10
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Oct 30 20:18:40   at 
> org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Oct 30 20:18:40   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Oct 30 20:18:40 Caused 

[jira] [Commented] (FLINK-33251) SQL Client query execution aborts after a few seconds: ConnectTimeoutException

2023-11-09 Thread Jorick Caberio (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784541#comment-17784541
 ] 

Jorick Caberio commented on FLINK-33251:


I was able to replicate this issue on my Mac Mini M2 running Flink 1.17.1

{code}
$ uname -a
Darwin Joricks-Mini.bbrouter 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun  8 
22:21:34 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T8112 arm64
{code}


{code:sql}
CREATE TABLE total_amount_table (
  `transaction_id` STRING,
  `transaction_datetime` STRING,
  `amount` DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'total_amount_table',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'total_amount_table',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);
{code}

{code:sql}
INSERT INTO total_amount_table VALUES ('txn1', '2023-11-09T16:10:25Z', 
125125.125);
{code}

{code:sql}
SELECT * FROM total_amount_table;
{code}


{code:sql}
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
connection timed out: localhost/127.0.0.1:50358
{code}


> SQL Client query execution aborts after a few seconds: ConnectTimeoutException
> --
>
> Key: FLINK-33251
> URL: https://issues.apache.org/jira/browse/FLINK-33251
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.18.0, 1.17.1
> Environment: Macbook Pro 
> Apple M1 Max
>  
> {code:java}
> $ uname -a
> Darwin asgard08 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 
> 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000 arm64
> {code}
> {code:bash}
> $ java --version
> openjdk 11.0.20.1 2023-08-24
> OpenJDK Runtime Environment Homebrew (build 11.0.20.1+0)
> OpenJDK 64-Bit Server VM Homebrew (build 11.0.20.1+0, mixed mode)
> $ mvn --version
> Apache Maven 3.9.5 (57804ffe001d7215b5e7bcb531cf83df38f93546)
> Maven home: /opt/homebrew/Cellar/maven/3.9.5/libexec
> Java version: 11.0.20.1, vendor: Homebrew, runtime: 
> /opt/homebrew/Cellar/openjdk@11/11.0.20.1/libexec/openjdk.jdk/Contents/Home
> Default locale: en_GB, platform encoding: UTF-8
> OS name: "mac os x", version: "14.0", arch: "aarch64", family: "mac"
> {code}
>Reporter: Robin Moffatt
>Priority: Major
> Attachments: log.zip
>
>
> If I run a streaming query from an unbounded connector from the SQL Client, 
> it bombs out after ~15 seconds.
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:52596
> {code}
> This *doesn't* happen on 1.16.2. It *does* happen on *1.17.1* and *1.18* that 
> I have just built locally (git repo hash `9b837727b6d`). 
> The corresponding task's status in the Web UI shows as `CANCELED`. 
> ---
> h2. To reproduce
> Launch local cluster and SQL client
> {code}
> ➜  flink-1.18-SNAPSHOT ./bin/start-cluster.sh 
> Starting cluster.
> Starting standalonesession daemon on host asgard08.
> Starting taskexecutor daemon on host asgard08.
> ➜  flink-1.18-SNAPSHOT ./bin/sql-client.sh
> […]
> Flink SQL>
> {code}
> Set streaming mode and result mode
> {code:sql}
> Flink SQL> SET 'execution.runtime-mode' = 'STREAMING';
> [INFO] Execute statement succeed.
> Flink SQL> SET 'sql-client.execution.result-mode' = 'changelog';
> [INFO] Execute statement succeed.
> {code}
> Define a table to read data from CSV files in a folder
> {code:sql}
> CREATE TABLE firewall (
>   event_time STRING,
>   source_ip  STRING,
>   dest_ipSTRING,
>   source_prt INT,
>   dest_prt   INT
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = 'file:///tmp/firewall/',
>   'format' = 'csv',
>   'source.monitor-interval' = '1' -- unclear from the docs what the unit is 
> here
> );
> {code}
> Create a CSV file to read in
> {code:bash}
> $ mkdir /tmp/firewall
> $ cat > /tmp/firewall/data.csv < 2018-05-11 00:19:34,151.35.34.162,125.26.20.222,2014,68
> 2018-05-11 22:20:43,114.24.126.190,21.68.21.69,379,1619
> EOF
> {code}
> Run a streaming query 
> {code}
> SELECT * FROM firewall;
> {code}
> You will get results showing (and if you add another data file it will show 
> up) - but after ~30 seconds the query aborts and throws an error back to the 
> user at the SQL Client prompt
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:58470
> Flink SQL>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33418][test,ci] Uses getHost() to access HiveContainer (instead of hard-coded IP) [flink]

2023-11-09 Thread via GitHub


XComp commented on PR #23649:
URL: https://github.com/apache/flink/pull/23649#issuecomment-1804169414

   In the end I realized that it was due to the e2e tests running in Docker (in 
contrast to Azure CI, where we execute them in the VM). I'm keeping this PR open 
because it's still an improvement to the code.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-25538) [JUnit5 Migration] Module: flink-connector-kafka

2023-11-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784539#comment-17784539
 ] 

Matthias Pohl commented on FLINK-25538:
---

Ok, thanks for the offer. I re-assigned the task to you since [~ashmeet] 
mentioned in [his GitHub 
comment|https://github.com/apache/flink/pull/20991#issuecomment-1613909720] 
that he wouldn't be able to work on it anymore.

> [JUnit5 Migration] Module: flink-connector-kafka
> 
>
> Key: FLINK-25538
> URL: https://issues.apache.org/jira/browse/FLINK-25538
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Assignee: xiang1 yu
>Priority: Minor
>  Labels: pull-request-available, stale-assigned, starter
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33419] Port PROCTIME/ROWTIME functions to the new inference stack [flink]

2023-11-09 Thread via GitHub


dawidwys commented on code in PR #23634:
URL: https://github.com/apache/flink/pull/23634#discussion_r1388261066


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/fieldExpression.scala:
##
@@ -103,71 +103,6 @@ case class TableReference(name: String, tableOperation: 
QueryOperation)
   override def toString: String = s"$name"
 }
 
-abstract class TimeAttribute(val expression: PlannerExpression)
-  extends UnaryExpression
-  with WindowProperty {
-
-  override private[flink] def child: PlannerExpression = expression
-}
-
-case class RowtimeAttribute(expr: PlannerExpression) extends 
TimeAttribute(expr) {
-
-  override private[flink] def validateInput(): ValidationResult = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isProctimeIndicatorType(tpe) =>
-ValidationFailure("A proctime window cannot provide a rowtime 
attribute.")
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-ValidationSuccess
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-ValidationSuccess
-  case WindowReference(_, _) =>
-ValidationFailure("Reference to a rowtime or proctime window 
required.")
-  case any =>
-ValidationFailure(
-  s"The '.rowtime' expression can only be used for table definitions 
and windows, " +
-s"while [$any] was found.")
-}
-  }
-
-  override def resultType: TypeInformation[_] = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-TimeIndicatorTypeInfo.ROWTIME_INDICATOR
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-Types.SQL_TIMESTAMP

Review Comment:
   Actually we do need to update it for `PROCTIME`, because we always need to 
return the `PROCTIME` time indicator.
   
   I thought having `timestampKind == TimestampKind.PROCTIME && 
!LogicalTypeChecks.isTimeAttribute(type)` for the input was enough, but it also 
passes for the `ROWTIME` time indicator.
   
   I'll update the output type strategy for `PROCTIME`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-25538) [JUnit5 Migration] Module: flink-connector-kafka

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-25538:
-

Assignee: xiang1 yu  (was: Ashmeet Kandhari)

> [JUnit5 Migration] Module: flink-connector-kafka
> 
>
> Key: FLINK-25538
> URL: https://issues.apache.org/jira/browse/FLINK-25538
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Assignee: xiang1 yu
>Priority: Minor
>  Labels: pull-request-available, stale-assigned, starter
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33419] Port PROCTIME/ROWTIME functions to the new inference stack [flink]

2023-11-09 Thread via GitHub


jnh5y commented on code in PR #23634:
URL: https://github.com/apache/flink/pull/23634#discussion_r1388236232


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/fieldExpression.scala:
##
@@ -103,71 +103,6 @@ case class TableReference(name: String, tableOperation: 
QueryOperation)
   override def toString: String = s"$name"
 }
 
-abstract class TimeAttribute(val expression: PlannerExpression)
-  extends UnaryExpression
-  with WindowProperty {
-
-  override private[flink] def child: PlannerExpression = expression
-}
-
-case class RowtimeAttribute(expr: PlannerExpression) extends 
TimeAttribute(expr) {
-
-  override private[flink] def validateInput(): ValidationResult = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isProctimeIndicatorType(tpe) =>
-ValidationFailure("A proctime window cannot provide a rowtime 
attribute.")
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-ValidationSuccess
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-ValidationSuccess
-  case WindowReference(_, _) =>
-ValidationFailure("Reference to a rowtime or proctime window 
required.")
-  case any =>
-ValidationFailure(
-  s"The '.rowtime' expression can only be used for table definitions 
and windows, " +
-s"while [$any] was found.")
-}
-  }
-
-  override def resultType: TypeInformation[_] = {
-child match {
-  case WindowReference(_, Some(tpe: TypeInformation[_])) if 
isRowtimeIndicatorType(tpe) =>
-// rowtime window
-TimeIndicatorTypeInfo.ROWTIME_INDICATOR
-  case WindowReference(_, Some(tpe)) if tpe == Types.LONG || tpe == 
Types.SQL_TIMESTAMP =>
-// batch time window
-Types.SQL_TIMESTAMP

Review Comment:
   Just to check my understanding, we do not need to update the output strategy 
for proctime since its type is constant, right?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33503) Upgrade Maven wrapper to 3.2.0

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33503:
-

Assignee: Matthias Pohl

> Upgrade Maven wrapper to 3.2.0
> --
>
> Key: FLINK-33503
> URL: https://issues.apache.org/jira/browse/FLINK-33503
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>
> The Maven wrapper downloads binaries to execute.
> In maven-wrapper 3.2.0 a checksum check was added; should we also 
> leverage this feature [1] before executing the downloaded binaries?
> [1] https://issues.apache.org/jira/browse/MWRAPPER-75



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33501) Rely on Maven wrapper instead of having custom Maven installation logic

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33501:
-

Assignee: Matthias Pohl

> Rely on Maven wrapper instead of having custom Maven installation logic
> ---
>
> Key: FLINK-33501
> URL: https://issues.apache.org/jira/browse/FLINK-33501
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> I noticed that we could use the Maven wrapper instead of having a custom 
> setup logic for Maven in CI.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33503) Upgrade Maven wrapper to 3.2.0

2023-11-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-33503:
-

 Summary: Upgrade Maven wrapper to 3.2.0
 Key: FLINK-33503
 URL: https://issues.apache.org/jira/browse/FLINK-33503
 Project: Flink
  Issue Type: Improvement
  Components: Build System / CI
Affects Versions: 1.17.1, 1.18.0, 1.19.0
Reporter: Matthias Pohl


The Maven wrapper downloads binaries to execute.
In maven-wrapper 3.2.0 a checksum check was added; should we also leverage 
this feature [1] before executing the downloaded binaries?
[1] https://issues.apache.org/jira/browse/MWRAPPER-75
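
For reference, a sketch of what enabling the 3.2.0 checksum feature could look 
like in {{.mvn/wrapper/maven-wrapper.properties}}, assuming the property names 
introduced by MWRAPPER-75; the checksum values are placeholders, not real 
digests:

{code}
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.8.6/apache-maven-3.8.6-bin.zip
# added in maven-wrapper 3.2.0: the wrapper refuses to run a download whose SHA-256 does not match
distributionSha256Sum=<sha-256-of-the-distribution-zip>
wrapperUrl=https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar
wrapperSha256Sum=<sha-256-of-the-wrapper-jar>
{code}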



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33419] Port PROCTIME/ROWTIME functions to the new inference stack [flink]

2023-11-09 Thread via GitHub


dawidwys commented on code in PR #23634:
URL: https://github.com/apache/flink/pull/23634#discussion_r1388219626


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/WindowTimeIndictorInputTypeStrategy.java:
##
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.TimestampKind;
+import org.apache.flink.table.types.logical.utils.LogicalTypeChecks;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * An {@link InputTypeStrategy} for {@link BuiltInFunctionDefinitions#ROWTIME} and {@link
+ * BuiltInFunctionDefinitions#PROCTIME}.
+ */
+@Internal
+public final class WindowTimeIndictorInputTypeStrategy implements InputTypeStrategy {
+    private final TimestampKind timestampKind;
+
+    public WindowTimeIndictorInputTypeStrategy(TimestampKind timestampKind) {
+        this.timestampKind = timestampKind;
+    }
+
+    @Override
+    public ArgumentCount getArgumentCount() {
+        return ConstantArgumentCount.of(1);
+    }
+
+    @Override
+    public Optional<List<DataType>> inferInputTypes(
+            CallContext callContext, boolean throwOnFailure) {
+        final LogicalType type = callContext.getArgumentDataTypes().get(0).getLogicalType();
+
+        if (timestampKind == TimestampKind.PROCTIME && !LogicalTypeChecks.isTimeAttribute(type)) {
+            return callContext.fail(
+                    throwOnFailure, "Reference to a rowtime or proctime window required.");
+        }
+
+        if (timestampKind == TimestampKind.ROWTIME && LogicalTypeChecks.isProctimeAttribute(type)) {
+            return callContext.fail(
+                    throwOnFailure, "A proctime window cannot provide a rowtime attribute.");
+        }
+
+        if (!LogicalTypeChecks.canBeTimeAttributeType(type) && !type.is(LogicalTypeRoot.BIGINT)) {
+            return callContext.fail(
+                    throwOnFailure, "Reference to a rowtime or proctime window required.");
Review Comment:
   added tests for that
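For readers following along, a hedged sketch (not from this PR; the table and field names are assumptions) of the Table API calls whose argument this strategy validates, namely a group-window reference passed to rowtime()/proctime():
{code:java}
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.lit;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.Tumble;

public class WindowTimeIndicatorSketch {

    // `input` is assumed to be a Table whose `ts` column is a rowtime attribute.
    static Table selectWindowRowtime(Table input) {
        return input
                .window(Tumble.over(lit(10).minutes()).on($("ts")).as("w"))
                .groupBy($("w"), $("user"))
                // $("w").rowtime() resolves to BuiltInFunctionDefinitions.ROWTIME,
                // whose single argument the strategy above checks; had "w" been
                // defined over a proctime attribute, it would fail with
                // "A proctime window cannot provide a rowtime attribute."
                .select($("user"), $("w").rowtime().as("window_rowtime"));
    }
}
{code}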



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33484) Flink Kafka Connector Offset Lag Issue with Transactional Data and Read Committed Isolation Level

2023-11-09 Thread Darcy Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784526#comment-17784526
 ] 

Darcy Lin commented on FLINK-33484:
---

[~martijnvisser] org.apache.flink:flink-connector-kafka:1.17.1

> Flink Kafka Connector Offset Lag Issue with Transactional Data and Read 
> Committed Isolation Level
> -
>
> Key: FLINK-33484
> URL: https://issues.apache.org/jira/browse/FLINK-33484
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
> Environment: Flink 1.17.1
> kafka 2.5.1
>Reporter: Darcy Lin
>Priority: Major
>
> We have encountered an issue with the Flink Kafka connector when consuming 
> transactional data from Kafka with {{isolation.level}} set to 
> {{read_committed}} ({{setProperty("isolation.level", "read_committed")}}). 
> The problem is that even when all the data from a topic has been consumed, 
> the offset lag is not 0 but 1. When using the Kafka Java client to consume 
> the same data, this issue does not occur.
> We suspect that this issue arises from the way the Flink Kafka connector 
> calculates the offset. The problem seems to be in the 
> {{KafkaRecordEmitter.java}} file, specifically in the {{emitRecord}} method. 
> When saving the offset, the method calls 
> {{splitState.setCurrentOffset(consumerRecord.offset() + 1);}}. While this 
> statement works correctly in a regular Kafka scenario, it might not be 
> accurate in {{read_committed}} mode. We believe it should be 
> {{splitState.setCurrentOffset(consumerRecord.offset() + 2);}}, as 
> transactional data in Kafka occupies an additional offset to store the 
> transaction marker.
> We ask the Flink team to investigate this issue and provide guidance on how 
> to resolve it.
> Thank you for your attention and support.
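As an illustration, a minimal, self-contained sketch (assumed offsets; not Flink or Kafka client code) of the arithmetic behind the reported lag of 1:
{code:java}
// Why committing "last record offset + 1" can leave a reported lag of 1 under
// read_committed: the transaction's commit marker occupies one extra offset.
public class OffsetLagSketch {
    public static void main(String[] args) {
        long lastUserRecordOffset = 41;                          // last data record handed to emitRecord()
        long transactionMarkerOffset = lastUserRecordOffset + 1; // control record, never emitted to the app
        long logEndOffset = transactionMarkerOffset + 1;         // broker's end offset, here 43

        long committedOffset = lastUserRecordOffset + 1;         // what "consumerRecord.offset() + 1" tracks
        System.out.println("reported lag = " + (logEndOffset - committedOffset)); // prints 1
    }
}
{code}
Note that a fixed {{+ 2}} would assume exactly one marker after every record, i.e. one record per committed transaction; Kafka writes one marker per transaction, not per record.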



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33488) Implement restore tests for Deduplicate node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33488:


Assignee: Jim Hughes

> Implement restore tests for Deduplicate node
> 
>
> Key: FLINK-33488
> URL: https://issues.apache.org/jira/browse/FLINK-33488
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33480) Implement restore tests for GroupAggregate node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33480:


Assignee: Bonnie Varghese

> Implement restore tests for GroupAggregate node
> ---
>
> Key: FLINK-33480
> URL: https://issues.apache.org/jira/browse/FLINK-33480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33470) Implement restore tests for Join node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33470:


Assignee: Jim Hughes

> Implement restore tests for Join node
> -
>
> Key: FLINK-33470
> URL: https://issues.apache.org/jira/browse/FLINK-33470
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33455) Implement restore tests for SortLimit node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33455:


Assignee: Bonnie Varghese

> Implement restore tests for SortLimit node
> --
>
> Key: FLINK-33455
> URL: https://issues.apache.org/jira/browse/FLINK-33455
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33441) Implement restore tests for ExecUnion node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33441:


Assignee: Bonnie Varghese

> Implement restore tests for ExecUnion node
> --
>
> Key: FLINK-33441
> URL: https://issues.apache.org/jira/browse/FLINK-33441
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33488] Implement restore tests for Deduplicate node [flink]

2023-11-09 Thread via GitHub


jnh5y commented on code in PR #23686:
URL: https://github.com/apache/flink/pull/23686#discussion_r1388205498


##
flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-deduplicate_1/deduplicate-asc-proctime/plan/deduplicate-asc-proctime.json:
##
@@ -0,0 +1,337 @@
+{
+  "flinkVersion" : "1.19",
+  "nodes" : [ {
+"id" : 8,
+"type" : "stream-exec-table-source-scan_1",
+"scanTableSource" : {
+  "table" : {
+"identifier" : "`default_catalog`.`default_database`.`MyTable`",
+"resolvedTable" : {
+  "schema" : {
+"columns" : [ {
+  "name" : "order_id",
+  "dataType" : "BIGINT"
+}, {
+  "name" : "user",
+  "dataType" : "VARCHAR(2147483647)"
+}, {
+  "name" : "product",
+  "dataType" : "VARCHAR(2147483647)"
+}, {
+  "name" : "order_time",
+  "dataType" : "BIGINT"
+}, {
+  "name" : "event_time",
+  "kind" : "COMPUTED",
+  "expression" : {
+"rexNode" : {
+  "kind" : "CALL",
+  "internalName" : "$TO_TIMESTAMP$1",
+  "operands" : [ {
+"kind" : "CALL",
+"internalName" : "$FROM_UNIXTIME$1",
+"operands" : [ {
+  "kind" : "INPUT_REF",
+  "inputIndex" : 3,
+  "type" : "BIGINT"
+} ],
+"type" : "VARCHAR(2147483647)"
+  } ],
+  "type" : "TIMESTAMP(3)"
+},
+"serializableString" : "TO_TIMESTAMP(FROM_UNIXTIME(`order_time`))"
+  }
+}, {
+  "name" : "proctime",
+  "kind" : "COMPUTED",
+  "expression" : {
+"rexNode" : {
+  "kind" : "CALL",
+  "internalName" : "$PROCTIME$1",
+  "operands" : [ ],
+  "type" : {
+"type" : "TIMESTAMP_WITH_LOCAL_TIME_ZONE",
+"nullable" : false,
+"precision" : 3,
+"kind" : "PROCTIME"
+  }
+},
+"serializableString" : "PROCTIME()"
+  }
+} ],
+"watermarkSpecs" : [ ]
+  },
+  "partitionKeys" : [ ]
+}
+  }
+},
+"outputType" : "ROW<`order_id` BIGINT, `user` VARCHAR(2147483647), `product` VARCHAR(2147483647), `order_time` BIGINT>",
+"description" : "TableSourceScan(table=[[default_catalog, default_database, MyTable]], fields=[order_id, user, product, order_time])",
+"inputProperties" : [ ]
+  }, {
+"id" : 9,
+"type" : "stream-exec-calc_1",
+"projection" : [ {
+  "kind" : "INPUT_REF",
+  "inputIndex" : 0,
+  "type" : "BIGINT"
+}, {
+  "kind" : "INPUT_REF",
+  "inputIndex" : 1,
+  "type" : "VARCHAR(2147483647)"
+}, {
+  "kind" : "INPUT_REF",
+  "inputIndex" : 2,
+  "type" : "VARCHAR(2147483647)"
+}, {
+  "kind" : "INPUT_REF",
+  "inputIndex" : 3,
+  "type" : "BIGINT"
+}, {
+  "kind" : "CALL",
+  "internalName" : "$PROCTIME$1",
+  "operands" : [ ],
+  "type" : {
+"type" : "TIMESTAMP_WITH_LOCAL_TIME_ZONE",
+"nullable" : false,
+"precision" : 3,
+"kind" : "PROCTIME"
+  }
+} ],
+"condition" : null,
+"inputProperties" : [ {
+  "requiredDistribution" : {
+"type" : "UNKNOWN"
+  },
+  "damBehavior" : "PIPELINED",
+  "priority" : 0
+} ],
+"outputType" : {
+  "type" : "ROW",
+  "fields" : [ {
+"name" : "order_id",
+"fieldType" : "BIGINT"
+  }, {
+"name" : "user",
+"fieldType" : "VARCHAR(2147483647)"
+  }, {
+"name" : "product",
+"fieldType" : "VARCHAR(2147483647)"
+  }, {
+"name" : "order_time",
+"fieldType" : "BIGINT"
+  }, {
+"name" : "proctime",
+"fieldType" : {
+  "type" : "TIMESTAMP_WITH_LOCAL_TIME_ZONE",
+  "nullable" : false,
+  "precision" : 3,
+  "kind" : "PROCTIME"
+}
+  } ]
+},
+"description" : "Calc(select=[order_id, user, product, order_time, PROCTIME() AS proctime])"
+  }, {
+"id" : 10,
+"type" : "stream-exec-exchange_1",
+"inputProperties" : [ {
+  "requiredDistribution" : {
+"type" : "HASH",
+"keys" : [ 2 ]
+  },
+  "damBehavior" : "PIPELINED",
+  "priority" : 0
+} ],
+"outputType" : {
+  "type" : "ROW",
+  "fields" : [ {
+"name" : "order_id",
+"fieldType" : "BIGINT"
+  }, {
+"name" : "user",
+"fieldType" : "VARCHAR(2147483647)"
+  }, {
+"name" : "product",
+"fieldType" : 

[jira] [Comment Edited] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784522#comment-17784522
 ] 

Matthias Pohl edited comment on FLINK-33323 at 11/9/23 3:49 PM:


Oh, you're right. I was misled by the "FATAL" error and ignored the rest of 
the logs. Sorry for that. I'm going to create a separate issue (FLINK-33502) and 
will continue monitoring whether it's related to my GHA infra work or some test 
instability. I still have to work on collecting the build artifacts properly. 
Closing the issue again.


was (Author: mapohl):
Oh, you're right. I was misled by the "FATAL" error and ignored the rest of 
the logs. Sorry for that. I'm going to create a separate issue and will continue 
monitoring whether it's related to my GHA infra work or some test instability. 
I still have to work on collecting the build artifacts properly. Closing the 
issue again.

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached
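A minimal, self-contained sketch (illustrative only; not Flink's DiskIOScheduler) of the failure mode in the quoted trace: a task schedules its next run while its pool is already shutting down, so the late schedule() call is rejected:
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RejectedScheduleSketch {
    public static void main(String[] args) {
        ScheduledExecutorService ioExecutor = Executors.newScheduledThreadPool(1);

        ioExecutor.shutdown(); // the pool enters the "Shutting down" state seen in the trace
        try {
            // mirrors DiskIOScheduler scheduling its next read after shutdown began
            ioExecutor.schedule(() -> System.out.println("never runs"), 10, TimeUnit.MILLISECONDS);
        } catch (RejectedExecutionException e) {
            // on Flink's io thread this exception is uncaught and reaches
            // FatalExitExceptionHandler, which stops the process
            System.out.println("rejected: " + e);
        }
    }
}
{code}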



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33469) Implement restore tests for Limit node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33469.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 7542b56f2abb860f42a83c4687f6e38bb82b78c6

> Implement restore tests for Limit node 
> ---
>
> Key: FLINK-33469
> URL: https://issues.apache.org/jira/browse/FLINK-33469
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33469) Implement restore tests for Limit node

2023-11-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33469:


Assignee: Jim Hughes

> Implement restore tests for Limit node 
> ---
>
> Key: FLINK-33469
> URL: https://issues.apache.org/jira/browse/FLINK-33469
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-33502:
-

 Summary: HybridShuffleITCase caused a fatal error
 Key: FLINK-33502
 URL: https://issues.apache.org/jira/browse/FLINK-33502
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.19.0
Reporter: Matthias Pohl


[https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]
{code:java}
Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check output in log
Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
Error: 21:21:35 21:21:35.379 [ERROR] org.apache.flink.test.runtime.HybridShuffleITCase
Error: 21:21:35 21:21:35.379 [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC -Xms256m -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 surefire6242806641230738408tmp surefire_1603959900047297795160tmp
Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check output in log
Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
Error: 21:21:35 21:21:35.379 [ERROR] org.apache.flink.test.runtime.HybridShuffleITCase
Error: 21:21:35 21:21:35.379 [ERROR]    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
Error: 21:21:35 21:21:35.379 [ERROR]    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
Error: 21:21:35 21:21:35.379 [ERROR]    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
Error: 21:21:35 21:21:35.379 [ERROR]    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
[...] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33469] Implement restore tests for Limit node [flink]

2023-11-09 Thread via GitHub


dawidwys merged PR #23675:
URL: https://github.com/apache/flink/pull/23675


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33501][ci] Makes use of Maven wrapper [flink]

2023-11-09 Thread via GitHub


snuyanzin commented on PR #23689:
URL: https://github.com/apache/flink/pull/23689#issuecomment-1804083642

   It downloads some binaries to execute.
   In maven-wrapper 3.2.0 a checksum check was added; should we also leverage 
this feature [1] to verify the downloaded binaries before executing them?
   [1] https://issues.apache.org/jira/browse/MWRAPPER-75


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Matthias Pohl (Jira)


[ https://issues.apache.org/jira/browse/FLINK-33323 ]


Matthias Pohl deleted comment on FLINK-33323:
---

was (Author: mapohl):
Closing the issue again. See the comments above

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784522#comment-17784522
 ] 

Matthias Pohl edited comment on FLINK-33323 at 11/9/23 3:47 PM:


Oh, you're right. I was misled by the "FATAL" error and ignored the rest of 
the logs. Sorry for that. I'm going to create a separate issue and will continue 
monitoring whether it's related to my GHA infra work or some test instability. 
I still have to work on collecting the build artifacts properly. Closing the 
issue again.


was (Author: mapohl):
Oh, you're right. I was misled by the "FATAL" error and ignored the rest of 
the logs. Sorry for that. I'm going to create a separate issue and will continue 
monitoring whether it's related to my GHA infra work or some test instability. 
I still have to work on collecting the build artifacts properly.

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-33323.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

Closing the issue again. See the comments above

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784522#comment-17784522
 ] 

Matthias Pohl commented on FLINK-33323:
---

Oh, you're right. I was misled by the "FATAL" error and ignored the rest of 
the logs. Sorry for that. I'm going to create a separate issue and will continue 
monitoring whether it's related to my GHA infra work or some test instability. 
I still have to work on collecting the build artifacts properly.

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33438) HiveITCase.testHiveDialect and HiveITCase.testReadWriteHive are failing

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-33438.
-
Resolution: Won't Fix

Fixed by not running the e2e tests in Docker

> HiveITCase.testHiveDialect and HiveITCase.testReadWriteHive are failing
> ---
>
> Key: FLINK-33438
> URL: https://issues.apache.org/jira/browse/FLINK-33438
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Tests
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: github-actions, test-stability
>
> https://github.com/XComp/flink/actions/runs/6729006580/job/18289544587#step:15:12706
> {code}
> Error: 08:09:00 08:09:00.159 [ERROR] 
> org.apache.flink.tests.hive.HiveITCase.testHiveDialect  Time elapsed: 43.377 
> s  <<< FAILURE!
> Nov 02 08:09:00 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: null. ==> expected: <true> but was: 
> <false>
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.checkResultFile(HiveITCase.java:204)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.runAndCheckSQL(HiveITCase.java:161)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.testHiveDialect(HiveITCase.java:131)
> Nov 02 08:09:00   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> Nov 02 08:09:00 
> Error: 08:09:00 08:09:00.159 [ERROR] 
> org.apache.flink.tests.hive.HiveITCase.testReadWriteHive  Time elapsed: 
> 37.006 s  <<< FAILURE!
> Nov 02 08:09:00 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: null. ==> expected: <true> but was: 
> <false>
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Nov 02 08:09:00   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.checkResultFile(HiveITCase.java:204)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.runAndCheckSQL(HiveITCase.java:161)
> Nov 02 08:09:00   at 
> org.apache.flink.tests.hive.HiveITCase.testReadWriteHive(HiveITCase.java:121)
> Nov 02 08:09:00   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33418) SqlGatewayE2ECase failed due to ConnectException

2023-11-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-33418.
-
Resolution: Won't Fix

Fixed by not running the e2e tests in Docker

> SqlGatewayE2ECase failed due to ConnectException
> 
>
> Key: FLINK-33418
> URL: https://issues.apache.org/jira/browse/FLINK-33418
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client, Tests
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: github-actions, pull-request-available, test-stability
>
> The container couldn't be started in [this 
> build|https://github.com/XComp/flink/actions/runs/6696839844/job/18195926497#step:15:11765]:
> {code}
> Error: 20:18:40 20:18:40.111 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 110.789 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Error: 20:18:40 20:18:40.111 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase  Time elapsed: 110.789 s  
> <<< ERROR!
> Oct 30 20:18:40 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hdp2.6-hive:10
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Oct 30 20:18:40   at 
> org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Oct 30 20:18:40   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Oct 30 20:18:40   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Oct 30 20:18:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Oct 30 20:18:40   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Oct 30 20:18:40   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Oct 30 20:18:40   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Oct 30 20:18:40   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Oct 30 20:18:40 Caused by: org.rnorth.ducttape.RetryCountExceededException: 
> 
