[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-05-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Fix Version/s: opensearch-2.0.0

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: opensearch-1.2.0, elasticsearch-3.2.0, opensearch-2.0.0
>
>
> Currently this is reproduced for the Elasticsearch connector:
> all CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z   java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything works fine.
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134
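
The thread dump above shows the test thread sleeping inside a condition-polling loop. As a rough illustration of that pattern (a hypothetical helper with made-up names, not Flink's actual CommonTestUtils code), a bounded wait loop looks like this; the hang occurs when the condition never becomes true and the effective deadline is very long:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitUtil {
    // Poll a condition until it holds or the deadline passes.
    // Hypothetical sketch of the polling pattern visible in the thread dump;
    // not the real org.apache.flink.runtime.testutils.CommonTestUtils code.
    static void waitUntilCondition(BooleanSupplier condition, long timeoutMs, long intervalMs)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs); // this sleep is the frame seen in the dump
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            waitUntilCondition(() -> false, 200, 10); // condition never holds
        } catch (TimeoutException e) {
            System.out.println("timed out");
        }
        waitUntilCondition(() -> true, 200, 10); // satisfied condition returns immediately
        System.out.println("done");
    }
}
```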



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33925) Extended failure handling for bulk requests (elasticsearch back port)

2024-05-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33925:

Fix Version/s: opensearch-2.0.0

> Extended failure handling for bulk requests (elasticsearch back port)
> -
>
> Key: FLINK-33925
> URL: https://issues.apache.org/jira/browse/FLINK-33925
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.1
>Reporter: Peter Schulz
>Assignee: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.2.0, opensearch-2.0.0
>
>
> This is a backport of the implementation for the Elasticsearch connector
> (see FLINK-32028) to achieve consistent APIs.





[jira] [Closed] (FLINK-30537) Add support for OpenSearch 2.3

2024-05-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-30537.
---
Resolution: Fixed

Closed in favor of FLINK-33859

> Add support for OpenSearch 2.3
> --
>
> Key: FLINK-30537
> URL: https://issues.apache.org/jira/browse/FLINK-30537
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Reporter: Martijn Visser
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Create a version for Flink’s Opensearch connector that supports version 2.3.
> From the ASF Flink Slack: 
> https://apache-flink.slack.com/archives/C03GV7L3G2C/p1672339157102319





[jira] [Resolved] (FLINK-34942) Support Flink 1.19, 1.20-SNAPSHOT for OpenSearch connector

2024-05-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34942.
-
Fix Version/s: opensearch-1.2.0
   opensearch-2.0.0
   Resolution: Fixed

> Support Flink 1.19, 1.20-SNAPSHOT for OpenSearch connector
> --
>
> Key: FLINK-34942
> URL: https://issues.apache.org/jira/browse/FLINK-34942
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 3.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.2.0, opensearch-2.0.0
>
>
> Currently it fails with a similar issue to FLINK-33493.





[jira] [Commented] (FLINK-34942) Support Flink 1.19, 1.20-SNAPSHOT for OpenSearch connector

2024-05-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846511#comment-17846511
 ] 

Sergey Nuyanzin commented on FLINK-34942:
-

Merged as 
[00f1a5b13bfbadcb8efce8e16fb06ddea0d8e48e|https://github.com/apache/flink-connector-opensearch/commit/00f1a5b13bfbadcb8efce8e16fb06ddea0d8e48e]

> Support Flink 1.19, 1.20-SNAPSHOT for OpenSearch connector
> --
>
> Key: FLINK-34942
> URL: https://issues.apache.org/jira/browse/FLINK-34942
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 3.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.2.0, opensearch-2.0.0
>
>
> Currently it fails with a similar issue to FLINK-33493.





[jira] [Comment Edited] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846452#comment-17846452
 ] 

Sergey Nuyanzin edited comment on FLINK-27741 at 5/15/24 5:38 AM:
--

Merged to master as 
[40fb49dd17b3e1b6c5aa0249514273730ebe9226|https://github.com/apache/flink/commit/40fb49dd17b3e1b6c5aa0249514273730ebe9226]
1.19: 
[190522c2c051e0ec05213be71fb7a59a517353b1|https://github.com/apache/flink/commit/190522c2c051e0ec05213be71fb7a59a517353b1]
1.18: 
[1e1a7f16b6f272334d9f9a1053b657148151a789|https://github.com/apache/flink/commit/1e1a7f16b6f272334d9f9a1053b657148151a789]


was (Author: sergey nuyanzin):
Merged to master as 
[40fb49dd17b3e1b6c5aa0249514273730ebe9226|https://github.com/apache/flink/commit/40fb49dd17b3e1b6c5aa0249514273730ebe9226]
1.19: 
[190522c2c051e0ec05213be71fb7a59a517353b1|https://github.com/apache/flink/commit/190522c2c051e0ec05213be71fb7a59a517353b1]

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> There is a 'NullPointerException' when using RANK() and DENSE_RANK() in an over
> window.
> {code:java}
> @Test
>   def testDenseRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, DENSE_RANK() OVER (PARTITION BY a ORDER BY 
> proctime) FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> {code:java}
> @Test
>   def testRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, RANK() OVER (PARTITION BY a ORDER BY proctime) 
> FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> Exception Info:
> {code:java}
> java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofInt$.length$extension(ArrayOps.scala:248)
>   at scala.collection.mutable.ArrayOps$ofInt.length(ArrayOps.scala:248)
>   at scala.collection.SeqLike.size(SeqLike.scala:104)
>   at scala.collection.SeqLike.size$(SeqLike.scala:104)
>   at scala.collection.mutable.ArrayOps$ofInt.size(ArrayOps.scala:242)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap(IndexedSeqLike.scala:95)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap$(IndexedSeqLike.scala:95)
>   at 
> scala.collection.mutable.ArrayOps$ofInt.sizeHintIfCheap(ArrayOps.scala:242)
>   at scala.collection.mutable.Builder.sizeHint(Builder.scala:77)
>   at scala.collection.mutable.Builder.sizeHint$(Builder.scala:76)
>   at scala.collection.mutable.ArrayBuilder.sizeHint(ArrayBuilder.scala:21)
>   at scala.collection.TraversableLike.builder$1(TraversableLike.scala:229)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:232)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.mutable.ArrayOps$ofInt.map(ArrayOps.scala:242)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createDenseRankAggFunction(AggFunctionFactory.scala:454)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:94)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.$anonfun$transformToAggregateInfoList$1(AggregateUtil.scala:445)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:435)
>   at 
> 
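
The bottom frames of the trace point at the planner mapping over an argument-index array inside AggFunctionFactory. A minimal Java reduction of that failure mode (hypothetical names; this is only a sketch of how a null index array produces the NPE at the map step, not the actual Scala planner code):

```java
import java.util.Arrays;

public class NpeSketch {
    // Hypothetical reduction of the trace: mapping over an index array
    // that is null for RANK()/DENSE_RANK() in the OVER clause fails on
    // the very first access to the array inside the stream pipeline.
    static int[] shiftIndexes(int[] argIndexes) {
        // throws NullPointerException when argIndexes is null
        return Arrays.stream(argIndexes).map(i -> i + 1).toArray();
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(shiftIndexes(new int[] {0, 1}))); // [1, 2]
        try {
            shiftIndexes(null);
        } catch (NullPointerException e) {
            System.out.println("NPE: null arg indexes");
        }
    }
}
```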

[jira] [Updated] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-27741:

Fix Version/s: 1.18.2

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>

[jira] [Updated] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-27741:

Fix Version/s: 1.19.1

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0, 1.19.1
>
>

[jira] [Comment Edited] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846452#comment-17846452
 ] 

Sergey Nuyanzin edited comment on FLINK-27741 at 5/15/24 5:36 AM:
--

Merged to master as 
[40fb49dd17b3e1b6c5aa0249514273730ebe9226|https://github.com/apache/flink/commit/40fb49dd17b3e1b6c5aa0249514273730ebe9226]
1.19: 
[190522c2c051e0ec05213be71fb7a59a517353b1|https://github.com/apache/flink/commit/190522c2c051e0ec05213be71fb7a59a517353b1]


was (Author: sergey nuyanzin):
Merged to master as 
[40fb49dd17b3e1b6c5aa0249514273730ebe9226|https://github.com/apache/flink/commit/40fb49dd17b3e1b6c5aa0249514273730ebe9226]

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>

[jira] [Resolved] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-27741.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>

[jira] [Commented] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846452#comment-17846452
 ] 

Sergey Nuyanzin commented on FLINK-27741:
-

Merged to master as 
[40fb49dd17b3e1b6c5aa0249514273730ebe9226|https://github.com/apache/flink/commit/40fb49dd17b3e1b6c5aa0249514273730ebe9226]

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:361)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil.transformToStreamAggregateInfoList(AggregateUtil.scala)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.createUnboundedOverProcessFunction(StreamExecOverAggregate.java:279)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.java:198)
>   
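The trace above points at `ArrayOps$ofInt.length` failing inside `createDenseRankAggFunction`, i.e. the planner maps over an argument-index array that is null for RANK/DENSE_RANK in the over-window path. A minimal sketch of that failure mode and a defensive guard (the names `buildOffsets`/`argIndexes` are illustrative, not Flink's actual planner code):

```java
import java.util.Arrays;

public class NullIndexArray {
    // buildOffsets stands in for the planner step that derives per-argument
    // offsets. Mapping over a null int array is exactly the NPE pattern in the
    // stack trace above; treating "no arguments" as an empty array avoids it.
    static int[] buildOffsets(int[] argIndexes) {
        if (argIndexes == null) {
            argIndexes = new int[0]; // defensive: RANK/DENSE_RANK take no args
        }
        return Arrays.stream(argIndexes).map(i -> i + 1).toArray();
    }

    public static void main(String[] args) {
        // With the guard, a null argument list yields an empty offset array
        // instead of throwing.
        assert buildOffsets(null).length == 0;
        assert Arrays.equals(buildOffsets(new int[] {0, 2}), new int[] {1, 3});
    }
}
```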

[jira] [Assigned] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-27741:
---

Assignee: chenzihao

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Assignee: chenzihao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> There is a 'NullPointerException' when using RANK() and DENSE_RANK() in an over 
> window.
> {code:java}
> @Test
>   def testDenseRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, DENSE_RANK() OVER (PARTITION BY a ORDER BY 
> proctime) FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> {code:java}
> @Test
>   def testRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, RANK() OVER (PARTITION BY a ORDER BY proctime) 
> FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> Exception Info:
> {code:java}
> java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofInt$.length$extension(ArrayOps.scala:248)
>   at scala.collection.mutable.ArrayOps$ofInt.length(ArrayOps.scala:248)
>   at scala.collection.SeqLike.size(SeqLike.scala:104)
>   at scala.collection.SeqLike.size$(SeqLike.scala:104)
>   at scala.collection.mutable.ArrayOps$ofInt.size(ArrayOps.scala:242)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap(IndexedSeqLike.scala:95)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap$(IndexedSeqLike.scala:95)
>   at 
> scala.collection.mutable.ArrayOps$ofInt.sizeHintIfCheap(ArrayOps.scala:242)
>   at scala.collection.mutable.Builder.sizeHint(Builder.scala:77)
>   at scala.collection.mutable.Builder.sizeHint$(Builder.scala:76)
>   at scala.collection.mutable.ArrayBuilder.sizeHint(ArrayBuilder.scala:21)
>   at scala.collection.TraversableLike.builder$1(TraversableLike.scala:229)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:232)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.mutable.ArrayOps$ofInt.map(ArrayOps.scala:242)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createDenseRankAggFunction(AggFunctionFactory.scala:454)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:94)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.$anonfun$transformToAggregateInfoList$1(AggregateUtil.scala:445)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:435)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:381)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:361)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil.transformToStreamAggregateInfoList(AggregateUtil.scala)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.createUnboundedOverProcessFunction(StreamExecOverAggregate.java:279)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.java:198)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
>   at 
> 

[jira] [Updated] (FLINK-33859) Support OpenSearch v2

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33859:

Fix Version/s: opensearch-2.0.0

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-2.0.0
>
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail when communicating with v2
>  
> It would also make sense to add integration and e2e tests against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33859) Support OpenSearch v2

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33859.
-
Resolution: Fixed

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-2.0.0
>
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail when communicating with v2
>  
> It would also make sense to add integration and e2e tests against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33859) Support OpenSearch v2

2024-05-14 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846215#comment-17846215
 ] 

Sergey Nuyanzin commented on FLINK-33859:
-

Merged as 
[22a2934c32898a1c4016d398333f82772522036f|https://github.com/apache/flink-connector-opensearch/commit/22a2934c32898a1c4016d398333f82772522036f]

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail when communicating with v2
>  
> It would also make sense to add integration and e2e tests against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35098) Incorrect results for queries like "10 >= y" on tables using Filesystem connector and Orc format

2024-05-14 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846194#comment-17846194
 ] 

Sergey Nuyanzin commented on FLINK-35098:
-

Merged as
1.18: 
[1f604da2dfc831d04826a20b3cb272d2ad9dfb56|https://github.com/apache/flink/commit/1f604da2dfc831d04826a20b3cb272d2ad9dfb56]
1.19: 
[e16da86dfb1fbeee541cd9dfccd5f5f4520b7396|https://github.com/apache/flink/commit/e16da86dfb1fbeee541cd9dfccd5f5f4520b7396]
master: 
[4165bac27bda4457e5940a994d923242d4a271dc|https://github.com/apache/flink/commit/4165bac27bda4457e5940a994d923242d4a271dc]

> Incorrect results for queries like "10 >= y" on tables using Filesystem 
> connector and Orc format
> 
>
> Key: FLINK-35098
> URL: https://issues.apache.org/jira/browse/FLINK-35098
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ORC, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.12.7, 1.13.6, 1.14.6, 1.15.4, 1.16.3, 1.17.2, 1.19.0, 
> 1.18.1
>Reporter: Andrey Gaskov
>Assignee: Andrey Gaskov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> When working with ORC files, there is an issue with evaluation of SQL queries 
> containing expressions with a literal as the first operand. Specifically, the 
> query *10 >= y* does not always return the correct result.
> This test added to OrcFileSystemITCase.java fails on the second check:
>  
> {code:java}
> @TestTemplate
> void testOrcFilterPushDownLiteralFirst() throws ExecutionException, 
> InterruptedException {
> super.tableEnv()
> .executeSql("insert into orcLimitTable values('a', 10, 10)")
> .await();
> List expected = Collections.singletonList(Row.of(10));
> check("select y from orcLimitTable where y <= 10", expected);
> check("select y from orcLimitTable where 10 >= y", expected);
> }
> Results do not match for query:
>   select y from orcLimitTable where 10 >= y
> Results
>  == Correct Result - 1 ==   == Actual Result - 0 ==
> !+I[10]    {code}
> The checks are equivalent and should evaluate to the same result. But the 
> second query doesn't return the record with y=10.
> The table is defined as:
> {code:java}
> create table orcLimitTable (
> x string,
> y int,
> a int) 
> with (
> 'connector' = 'filesystem',
> 'path' = '/tmp/junit4374176500101507155/junit7109291529844202275/',
> 'format'='orc'){code}
>  
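The equivalence the test relies on — `10 >= y` is the same predicate as `y <= 10` — has to be restored explicitly when the literal appears as the first operand: the comparison operator must be mirrored before the column/literal pair is handed to the ORC filter. A minimal sketch of that normalization (hypothetical helper, not Flink's actual filter-conversion code):

```java
import java.util.Map;

public class PredicateMirror {
    // When a pushed-down predicate has the literal on the left ("10 >= y"),
    // mirror the comparison so the column ends up on the left ("y <= 10").
    // Symmetric operators map to themselves.
    static final Map<String, String> MIRROR = Map.of(
            ">=", "<=",
            "<=", ">=",
            ">", "<",
            "<", ">",
            "=", "=",
            "<>", "<>");

    static String mirror(String op) {
        return MIRROR.get(op);
    }

    public static void main(String[] args) {
        // "10 >= y" becomes "y <= 10"; "10 < y" becomes "y > 10".
        assert mirror(">=").equals("<=");
        assert mirror("<").equals(">");
        assert mirror("=").equals("=");
    }
}
```

Without this mirroring step the filter is built with the wrong operator direction, which matches the symptom above: `y <= 10` returns the row while `10 >= y` does not.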



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35098) Incorrect results for queries like "10 >= y" on tables using Filesystem connector and Orc format

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35098.
-
Fix Version/s: 1.18.2
   1.20.0
   1.19.1
   Resolution: Fixed

> Incorrect results for queries like "10 >= y" on tables using Filesystem 
> connector and Orc format
> 
>
> Key: FLINK-35098
> URL: https://issues.apache.org/jira/browse/FLINK-35098
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ORC, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.12.7, 1.13.6, 1.14.6, 1.15.4, 1.16.3, 1.17.2, 1.19.0, 
> 1.18.1
>Reporter: Andrey Gaskov
>Assignee: Andrey Gaskov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> When working with ORC files, there is an issue with evaluation of SQL queries 
> containing expressions with a literal as the first operand. Specifically, the 
> query *10 >= y* does not always return the correct result.
> This test added to OrcFileSystemITCase.java fails on the second check:
>  
> {code:java}
> @TestTemplate
> void testOrcFilterPushDownLiteralFirst() throws ExecutionException, 
> InterruptedException {
> super.tableEnv()
> .executeSql("insert into orcLimitTable values('a', 10, 10)")
> .await();
> List expected = Collections.singletonList(Row.of(10));
> check("select y from orcLimitTable where y <= 10", expected);
> check("select y from orcLimitTable where 10 >= y", expected);
> }
> Results do not match for query:
>   select y from orcLimitTable where 10 >= y
> Results
>  == Correct Result - 1 ==   == Actual Result - 0 ==
> !+I[10]    {code}
> The checks are equivalent and should evaluate to the same result. But the 
> second query doesn't return the record with y=10.
> The table is defined as:
> {code:java}
> create table orcLimitTable (
> x string,
> y int,
> a int) 
> with (
> 'connector' = 'filesystem',
> 'path' = '/tmp/junit4374176500101507155/junit7109291529844202275/',
> 'format'='orc'){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35098) Incorrect results for queries like "10 >= y" on tables using Filesystem connector and Orc format

2024-05-14 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-35098:
---

Assignee: Andrey Gaskov

> Incorrect results for queries like "10 >= y" on tables using Filesystem 
> connector and Orc format
> 
>
> Key: FLINK-35098
> URL: https://issues.apache.org/jira/browse/FLINK-35098
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ORC, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.12.7, 1.13.6, 1.14.6, 1.15.4, 1.16.3, 1.17.2, 1.19.0, 
> 1.18.1
>Reporter: Andrey Gaskov
>Assignee: Andrey Gaskov
>Priority: Major
>  Labels: pull-request-available
>
> When working with ORC files, there is an issue with evaluation of SQL queries 
> containing expressions with a literal as the first operand. Specifically, the 
> query *10 >= y* does not always return the correct result.
> This test added to OrcFileSystemITCase.java fails on the second check:
>  
> {code:java}
> @TestTemplate
> void testOrcFilterPushDownLiteralFirst() throws ExecutionException, 
> InterruptedException {
> super.tableEnv()
> .executeSql("insert into orcLimitTable values('a', 10, 10)")
> .await();
> List expected = Collections.singletonList(Row.of(10));
> check("select y from orcLimitTable where y <= 10", expected);
> check("select y from orcLimitTable where 10 >= y", expected);
> }
> Results do not match for query:
>   select y from orcLimitTable where 10 >= y
> Results
>  == Correct Result - 1 ==   == Actual Result - 0 ==
> !+I[10]    {code}
> The checks are equivalent and should evaluate to the same result. But the 
> second query doesn't return the record with y=10.
> The table is defined as:
> {code:java}
> create table orcLimitTable (
> x string,
> y int,
> a int) 
> with (
> 'connector' = 'filesystem',
> 'path' = '/tmp/junit4374176500101507155/junit7109291529844202275/',
> 'format'='orc'){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845676#comment-17845676
 ] 

Sergey Nuyanzin commented on FLINK-35322:
-

Currently I am not sure whether we can close this task, since 3.1.0 is in its 
voting phase and, in case the vote passes, this fix will not be part of 3.1.0...

Let's keep it open for now...


Or, WDYT, [~danny.cranmer], since you are the RM for this release?

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845675#comment-17845675
 ] 

Sergey Nuyanzin commented on FLINK-35322:
-

Merged as 
[725f3d66e8065e457acab332216a63184db16e0b|https://github.com/apache/flink-connector-gcp-pubsub/commit/725f3d66e8065e457acab332216a63184db16e0b]

Thanks for the fix [~chalixar]

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-35322:
---

Assignee: Ahmed Hamdy

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33859) Support OpenSearch v2

2024-05-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33859:

Affects Version/s: (was: opensearch-1.2.0)

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail when communicating with v2
>  
> It would also make sense to add integration and e2e tests against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33859) Support OpenSearch v2

2024-05-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33859:

Affects Version/s: opensearch-1.1.0

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail when communicating with v2
>  
> It would also make sense to add integration and e2e tests against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34369) Elasticsearch connector supports SSL context

2024-05-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843415#comment-17843415
 ] 

Sergey Nuyanzin commented on FLINK-34369:
-

The fix version should be after 3.1.0; however, 3.1.0 is still in the voting stage...
Let's keep it open until another unreleased version becomes available in 
Jira, or until the fix is picked up in another RC

> Elasticsearch connector supports SSL context
> 
>
> Key: FLINK-34369
> URL: https://issues.apache.org/jira/browse/FLINK-34369
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> The current Flink Elasticsearch connector does not support an SSL option, 
> causing issues when connecting to secure ES clusters.
> As SSLContext is not serializable and possibly environment-aware, we can add 
> a (serializable) provider of the SSL context to the {{NetworkClientConfig}}.
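The provider idea in the description can be sketched as a serializable factory: the job ships the factory, and each task builds its own SSLContext locally on deserialization. This is an illustrative shape only; the interface name and usage are assumptions, not the connector's actual API:

```java
import java.io.Serializable;
import javax.net.ssl.SSLContext;

public class SslProviderSketch {
    // A serializable factory for the non-serializable SSLContext. Each task
    // manager calls get() locally instead of shipping the context itself,
    // which also lets the context be environment-specific per host.
    @FunctionalInterface
    interface SSLContextProvider extends Serializable {
        SSLContext get() throws Exception;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage: a method-reference lambda is serializable, and
        // the actual SSLContext is built on the receiving side.
        SSLContextProvider provider = SSLContext::getDefault;
        assert provider.get() != null;
    }
}
```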



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34369) Elasticsearch connector supports SSL context

2024-05-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843414#comment-17843414
 ] 

Sergey Nuyanzin commented on FLINK-34369:
-

Merged as 
[5d1f8d03e3cff197ed7fe30b79951e44808b48fe|https://github.com/apache/flink-connector-elasticsearch/commit/5d1f8d03e3cff197ed7fe30b79951e44808b48fe]

> Elasticsearch connector supports SSL context
> 
>
> Key: FLINK-34369
> URL: https://issues.apache.org/jira/browse/FLINK-34369
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> The current Flink Elasticsearch connector does not support an SSL option, 
> causing issues when connecting to secure ES clusters.
> As SSLContext is not serializable and possibly environment-aware, we can add 
> a (serializable) provider of the SSL context to the {{NetworkClientConfig}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34369) Elasticsearch connector supports SSL context

2024-05-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-34369:
---

Assignee: Mingliang Liu

> Elasticsearch connector supports SSL context
> 
>
> Key: FLINK-34369
> URL: https://issues.apache.org/jira/browse/FLINK-34369
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> The current Flink Elasticsearch connector does not support an SSL option, 
> causing issues when connecting to secure ES clusters.
> As SSLContext is not serializable and possibly environment-aware, we can add 
> a (serializable) provider of the SSL context to the {{NetworkClientConfig}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33440) Bump flink version on flink-connectors-hbase

2024-04-16 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33440.
---

> Bump flink version on flink-connectors-hbase
> 
>
> Key: FLINK-33440
> URL: https://issues.apache.org/jira/browse/FLINK-33440
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: Ferenc Csaky
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: hbase-4.0.0
>
>
> Follow up on the 1.18 release in the connector repo as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33440) Bump flink version on flink-connectors-hbase

2024-04-16 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33440.
-
Fix Version/s: hbase-4.0.0
   Resolution: Fixed

> Bump flink version on flink-connectors-hbase
> 
>
> Key: FLINK-33440
> URL: https://issues.apache.org/jira/browse/FLINK-33440
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: Ferenc Csaky
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: hbase-4.0.0
>
>
> Follow up on the 1.18 release in the connector repo as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33440) Bump flink version on flink-connectors-hbase

2024-04-16 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837705#comment-17837705
 ] 

Sergey Nuyanzin commented on FLINK-33440:
-

Merged as 
[08b7b69cd82acf3e8ba9af08d715b0b9616af0b0|https://github.com/apache/flink-connector-hbase/commit/08b7b69cd82acf3e8ba9af08d715b0b9616af0b0]

> Bump flink version on flink-connectors-hbase
> 
>
> Key: FLINK-33440
> URL: https://issues.apache.org/jira/browse/FLINK-33440
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: Ferenc Csaky
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Follow up on the 1.18 release in the connector repo as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33440) Bump flink version on flink-connectors-hbase

2024-04-16 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-33440:
---

Assignee: Ferenc Csaky

> Bump flink version on flink-connectors-hbase
> 
>
> Key: FLINK-33440
> URL: https://issues.apache.org/jira/browse/FLINK-33440
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: Ferenc Csaky
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Follow up on the 1.18 release in the connector repo as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Fix Version/s: opensearch-1.2.0

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: opensearch-1.2.0, elasticsearch-3.2.0
>
>
> Currently it is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on 
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z   java.lang.Thread.State: TIMED_WAITING 
> (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134
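The hang in the trace above is a wait-until-condition polling loop whose condition never becomes true, so the "main" thread keeps parking in `Thread.sleep` (the `TIMED_WAITING` state). As an illustration only — this is not the actual `CommonTestUtils` implementation, and the class name, signatures, and parameters below are assumptions — a bounded version of that pattern looks roughly like this:

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

// Illustrative sketch of the polling pattern visible in the stack trace
// (Thread.sleep inside a wait-until-condition loop). NOT the actual
// Flink CommonTestUtils code; names and signatures are assumptions.
public class WaitUntil {

    public static boolean waitUntilCondition(
            BooleanSupplier condition, Duration timeout, long retryMillis)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        // Overflow-safe deadline comparison.
        while (System.nanoTime() - deadline < 0) {
            if (condition.getAsBoolean()) {
                return true; // condition met within the timeout
            }
            // This sleep is where the thread sits in TIMED_WAITING; if the
            // condition never becomes true, the loop spins until the deadline
            // (or indefinitely, if the caller passed no effective bound).
            Thread.sleep(retryMillis);
        }
        return false; // bounded variant: give up instead of hanging the CI job
    }

    public static void main(String[] args) throws InterruptedException {
        // A condition that never holds: with a bound we fail fast.
        boolean met = waitUntilCondition(() -> false, Duration.ofMillis(200), 20);
        System.out.println(met); // prints "false"
    }
}
```

With an unbounded (or very large) timeout and a condition that can never be satisfied — here, the expected sink results never appearing — such a loop hangs the whole job, which matches the stuck `testScaleDown` runs.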



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837324#comment-17837324
 ] 

Sergey Nuyanzin commented on FLINK-35094:
-

Opensearch connector was fixed within 
https://github.com/apache/flink-connector-opensearch/commit/00f1a5b13bfbadcb8efce8e16fb06ddea0d8e48e

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: elasticsearch-3.2.0
>
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134





[jira] [Commented] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837323#comment-17837323
 ] 

Sergey Nuyanzin commented on FLINK-35094:
-

Merged as 
[516a89864b38e35261675f873e37453df761324d|https://github.com/apache/flink-connector-elasticsearch/commit/516a89864b38e35261675f873e37453df761324d]

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134





[jira] [Assigned] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-35094:
---

Assignee: Sergey Nuyanzin

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134





[jira] [Resolved] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35094.
-
Fix Version/s: elasticsearch-3.2.0
   Resolution: Fixed

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: elasticsearch-3.2.0
>
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134





[jira] [Commented] (FLINK-34158) Migrate WindowAggregateReduceFunctionsRule

2024-04-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837086#comment-17837086
 ] 

Sergey Nuyanzin commented on FLINK-34158:
-

Merged as  
[f74dc57561a058696bd2bd42593f862a9b490474|https://github.com/apache/flink/commit/f74dc57561a058696bd2bd42593f862a9b490474]

> Migrate WindowAggregateReduceFunctionsRule
> --
>
> Key: FLINK-34158
> URL: https://issues.apache.org/jira/browse/FLINK-34158
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Resolved] (FLINK-34158) Migrate WindowAggregateReduceFunctionsRule

2024-04-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34158.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Migrate WindowAggregateReduceFunctionsRule
> --
>
> Key: FLINK-34158
> URL: https://issues.apache.org/jira/browse/FLINK-34158
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34158) Migrate WindowAggregateReduceFunctionsRule

2024-04-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34158.
---

> Migrate WindowAggregateReduceFunctionsRule
> --
>
> Key: FLINK-34158
> URL: https://issues.apache.org/jira/browse/FLINK-34158
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Comment Edited] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-04-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836128#comment-17836128
 ] 

Sergey Nuyanzin edited comment on FLINK-34961 at 4/12/24 9:45 AM:
--

Merged to flink-connector-kafka main as 
[c47abb3933b7c1e567a9142c6495038d16d42dd0|https://github.com/apache/flink-connector-kafka/commit/c47abb3933b7c1e567a9142c6495038d16d42dd0]
flink-connector-kafka v3.1 
[ef203c7d3bd4e3507cef973bf4fc1e73b24900f8|https://github.com/apache/flink-connector-kafka/commit/ef203c7d3bd4e3507cef973bf4fc1e73b24900f8]

flink-connector-jdbc main 
[64c7b754812fa163946808c92a08c8a3eb6ddc94|https://github.com/apache/flink-connector-jdbc/commit/64c7b754812fa163946808c92a08c8a3eb6ddc94]
flink-connector-jdbc v3.1 
[cb26a9ca7484672bb2d557ff0b70fe4a273c6ffc|https://github.com/apache/flink-connector-jdbc/commit/cb26a9ca7484672bb2d557ff0b70fe4a273c6ffc]


was (Author: sergey nuyanzin):
Merged to flink-connector-kafka main as 
[c47abb3933b7c1e567a9142c6495038d16d42dd0|https://github.com/apache/flink-connector-kafka/commit/c47abb3933b7c1e567a9142c6495038d16d42dd0]

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: kafka-4.0.0
>
>
> Apache Infra allows monitoring runner usage per workflow (see the [report for 
> Flink|https://infra-reports.apache.org/#ghactions=flink=168=10]; 
> only accessible with Apache committer rights). The data is accumulated by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}, which makes differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (the Kubernetes operator and the JDBC connector were identified, for 
> instance) and giving each a more distinct name.
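Giving each repository a distinct workflow name amounts to a one-line change in that repository's workflow file. A hypothetical sketch — the file path and the chosen name here are illustrative assumptions, not taken from the actual changes:

```yaml
# .github/workflows/ci.yml  (path assumed for illustration)
# A repository-specific name lets Apache Infra's per-workflow report
# attribute runner usage to this repo instead of lumping everything
# under the generic "CI".
name: "Flink JDBC Connector CI"
on:
  push:
  pull_request:
```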





[jira] [Updated] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-04-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-34961:

Fix Version/s: jdbc-3.2.0
   jdbc-3.1.3
   kafka-3.1.1

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: kafka-4.0.0, jdbc-3.2.0, jdbc-3.1.3, kafka-3.1.1
>
>
> Apache Infra allows monitoring runner usage per workflow (see the [report for 
> Flink|https://infra-reports.apache.org/#ghactions=flink=168=10]; 
> only accessible with Apache committer rights). The data is accumulated by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}, which makes differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (the Kubernetes operator and the JDBC connector were identified, for 
> instance) and giving each a more distinct name.





[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Description: 
Currently this is reproduced for the Elasticsearch connector:
all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
{noformat}
2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  [0x7f6712513000]
2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
2024-04-12T05:56:50.6181497Z    at java.lang.Thread.sleep(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6182762Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
2024-04-12T05:56:50.6184456Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2024-04-12T05:56:50.6186346Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
2024-04-12T05:56:50.6188474Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
2024-04-12T05:56:50.6190145Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
2024-04-12T05:56:50.6191247Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6192806Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
2024-04-12T05:56:50.6193863Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
2024-04-12T05:56:50.6194834Z    at java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
{noformat}

For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
https://github.com/apache/flink-connector-elasticsearch/actions/runs/8538572134

  was:
Currently this is reproduced for the Elasticsearch connector:
all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
{noformat}
2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  [0x7f6712513000]
2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
2024-04-12T05:56:50.6181497Z    at java.lang.Thread.sleep(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6182762Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
2024-04-12T05:56:50.6184456Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2024-04-12T05:56:50.6186346Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
2024-04-12T05:56:50.6188474Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
2024-04-12T05:56:50.6190145Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
2024-04-12T05:56:50.6191247Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6192806Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
2024-04-12T05:56:50.6193863Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
2024-04-12T05:56:50.6194834Z    at java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
{noformat}

For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
https://github.com/apache/flink-connector-elasticsearch/actions/runs/8436631571


> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 

[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Description: 
Currently this is reproduced for the Elasticsearch connector:
all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
{noformat}
2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  [0x7f6712513000]
2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
2024-04-12T05:56:50.6181497Z    at java.lang.Thread.sleep(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6182762Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
2024-04-12T05:56:50.6184456Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2024-04-12T05:56:50.6186346Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
2024-04-12T05:56:50.6188474Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
2024-04-12T05:56:50.6190145Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
2024-04-12T05:56:50.6191247Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6192806Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
2024-04-12T05:56:50.6193863Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
2024-04-12T05:56:50.6194834Z    at java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
{noformat}

For 1.17, 1.18, and 1.19 there is no such issue and everything is OK:
https://github.com/apache/flink-connector-elasticsearch/actions/runs/8436631571

  was:
Currently this is reproduced for the Elasticsearch connector:
all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
{noformat}
2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  [0x7f6712513000]
2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
2024-04-12T05:56:50.6181497Z    at java.lang.Thread.sleep(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6182762Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
2024-04-12T05:56:50.6184456Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2024-04-12T05:56:50.6186346Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
2024-04-12T05:56:50.6188474Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
2024-04-12T05:56:50.6190145Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
2024-04-12T05:56:50.6191247Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6192806Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
2024-04-12T05:56:50.6193863Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
2024-04-12T05:56:50.6194834Z    at java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
{noformat}

For 1.17, 1.18, and 1.19 there is no such issue and everything is OK.


> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> 

[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Component/s: Tests

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>
> Currently this is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue and everything is OK.





[jira] [Created] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-12 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-35094:
---

 Summary: SinkTestSuiteBase.testScaleDown is hanging for 
1.20-SNAPSHOT
 Key: FLINK-35094
 URL: https://issues.apache.org/jira/browse/FLINK-35094
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: elasticsearch-3.1.0, 1.20.0
Reporter: Sergey Nuyanzin


Currently this is reproduced for the Elasticsearch connector:
all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on:
{noformat}
2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  [0x7f6712513000]
2024-04-12T05:56:50.6180667Z    java.lang.Thread.State: TIMED_WAITING (sleeping)
2024-04-12T05:56:50.6181497Z    at java.lang.Thread.sleep(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6182762Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
2024-04-12T05:56:50.6184456Z    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2024-04-12T05:56:50.6186346Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
2024-04-12T05:56:50.6188474Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
2024-04-12T05:56:50.6190145Z    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
2024-04-12T05:56:50.6191247Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native Method)
2024-04-12T05:56:50.6192806Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
2024-04-12T05:56:50.6193863Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
2024-04-12T05:56:50.6194834Z    at java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
{noformat}

For 1.17, 1.18, and 1.19 there is no such issue; everything is OK.
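The thread dump shows the test parked inside CommonTestUtils.waitUntilCondition, i.e. a sleep-and-poll loop whose condition never becomes true. A minimal sketch of that pattern follows; the class name, signature, and return-value convention are assumptions for illustration (Flink's actual helper fails with a timeout exception rather than returning false):

```java
import java.time.Duration;
import java.util.function.Supplier;

public class WaitUtil {

    // Poll the condition, sleeping between attempts, until it holds or the
    // deadline passes. Without a deadline (or with a very large one), a
    // condition that never becomes true hangs the caller exactly as seen
    // in the thread dump above.
    static boolean waitUntilCondition(
            Supplier<Boolean> condition, Duration timeout, long retryIntervalMillis) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() >= deadline) {
                return false; // give up instead of sleeping forever
            }
            try {
                Thread.sleep(retryIntervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A condition that becomes true after roughly 50 ms.
        long start = System.currentTimeMillis();
        boolean ok = waitUntilCondition(
                () -> System.currentTimeMillis() - start > 50,
                Duration.ofSeconds(5), 10);
        System.out.println(ok);
    }
}
```

Under this pattern, the hang means checkResultWithSemantic's condition (the expected sink records appearing) was never satisfied on 1.20-SNAPSHOT.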



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35094) SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT

2024-04-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35094:

Labels: test-stability  (was: )

> SinkTestSuiteBase.testScaleDown is hanging for 1.20-SNAPSHOT
> 
>
> Key: FLINK-35094
> URL: https://issues.apache.org/jira/browse/FLINK-35094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.1.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>
> Currently it is reproduced for the Elasticsearch connector:
> all the CI jobs (for all JDKs) against 1.20-SNAPSHOT are hanging on 
> {noformat}
> 2024-04-12T05:56:50.6179284Z "main" #1 prio=5 os_prio=0 cpu=18726.96ms 
> elapsed=2522.03s tid=0x7f670c025a50 nid=0x3c6d waiting on condition  
> [0x7f6712513000]
> 2024-04-12T05:56:50.6180667Zjava.lang.Thread.State: TIMED_WAITING 
> (sleeping)
> 2024-04-12T05:56:50.6181497Z  at 
> java.lang.Thread.sleep(java.base@17.0.10/Native Method)
> 2024-04-12T05:56:50.6182762Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152)
> 2024-04-12T05:56:50.6184456Z  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> 2024-04-12T05:56:50.6186346Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.checkResultWithSemantic(SinkTestSuiteBase.java:504)
> 2024-04-12T05:56:50.6188474Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:327)
> 2024-04-12T05:56:50.6190145Z  at 
> org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testScaleDown(SinkTestSuiteBase.java:224)
> 2024-04-12T05:56:50.6191247Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@17.0.10/Native
>  Method)
> 2024-04-12T05:56:50.6192806Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@17.0.10/NativeMethodAccessorImpl.java:77)
> 2024-04-12T05:56:50.6193863Z  at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@17.0.10/DelegatingMethodAccessorImpl.java:43)
> 2024-04-12T05:56:50.6194834Z  at 
> java.lang.reflect.Method.invoke(java.base@17.0.10/Method.java:568)
> {noformat}
> For 1.17, 1.18, and 1.19 there is no such issue; everything is OK.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-04-11 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-34961:

Fix Version/s: kafka-4.0.0

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: kafka-4.0.0
>
>
> Apache Infra allows the monitoring of runner usage per workflow (see [report 
> for 
> Flink|https://infra-reports.apache.org/#ghactions=flink=168=10];
>   only accessible with Apache committer rights). They accumulate the data by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}. That makes differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (Kubernetes operator and the JDBC connector were identified, for 
> instance) and adding a more distinct name.
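Concretely, the fix is to change the top-level {{name}} field of each repository's workflow file (e.g. {{.github/workflows/ci.yml}}) from the generic value to a repository-specific one. A hypothetical before/after (the exact file path and chosen name are assumptions, not taken from the issue):

```yaml
# Before: every repo reports under the same bucket in the Infra report.
# name: CI

# After: a distinct, repository-specific workflow name.
name: Flink Kafka Connector CI
```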



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-04-11 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836128#comment-17836128
 ] 

Sergey Nuyanzin commented on FLINK-34961:
-

Merged to flink-connector-kafka main as 
[c47abb3933b7c1e567a9142c6495038d16d42dd0|https://github.com/apache/flink-connector-kafka/commit/c47abb3933b7c1e567a9142c6495038d16d42dd0]

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, starter
>
> Apache Infra allows the monitoring of runner usage per workflow (see [report 
> for 
> Flink|https://infra-reports.apache.org/#ghactions=flink=168=10];
>   only accessible with Apache committer rights). They accumulate the data by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}. That makes differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (Kubernetes operator and the JDBC connector were identified, for 
> instance) and adding a more distinct name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35060) Provide compatibility of old CheckpointMode for connector testing framework

2024-04-11 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-35060.
---

> Provide compatibility of old CheckpointMode for connector testing framework
> ---
>
> Key: FLINK-35060
> URL: https://issues.apache.org/jira/browse/FLINK-35060
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Tests
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> After FLINK-34516, the {{org.apache.flink.streaming.api.CheckpointingMode}} 
> has been moved to {{org.apache.flink.core.execution.CheckpointingMode}}. It 
> introduced a breaking change to connector testing framework as well as to 
> externalized connector repos by mistake. This should be fixed.
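A common way to restore compatibility after such a class move is a deprecation bridge: the old type stays in place as a thin, deprecated alias over the relocated type. The sketch below is a self-contained illustration of that pattern only; the nested-class layout and the {{toNew()}} conversion method are invented here, not Flink's actual API:

```java
// Stand-in for the new location (org.apache.flink.core.execution in Flink).
class NewApi {
    enum CheckpointingMode { EXACTLY_ONCE, AT_LEAST_ONCE }
}

// Stand-in for the old location (org.apache.flink.streaming.api in Flink):
// kept so existing connector code still compiles, but marked deprecated.
@Deprecated
enum LegacyCheckpointingMode {
    EXACTLY_ONCE, AT_LEAST_ONCE;

    // Map the legacy constant onto the relocated enum by name.
    NewApi.CheckpointingMode toNew() {
        return NewApi.CheckpointingMode.valueOf(name());
    }
}

public class Bridge {
    public static void main(String[] args) {
        System.out.println(LegacyCheckpointingMode.EXACTLY_ONCE.toNew());
    }
}
```

With a bridge like this, the connector testing framework can keep accepting the old enum while internally converting to the new one.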



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35060) Provide compatibility of old CheckpointMode for connector testing framework

2024-04-11 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35060.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Provide compatibility of old CheckpointMode for connector testing framework
> ---
>
> Key: FLINK-35060
> URL: https://issues.apache.org/jira/browse/FLINK-35060
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Tests
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> After FLINK-34516, the {{org.apache.flink.streaming.api.CheckpointingMode}} 
> has been moved to {{org.apache.flink.core.execution.CheckpointingMode}}. It 
> introduced a breaking change to connector testing framework as well as to 
> externalized connector repos by mistake. This should be fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35060) Provide compatibility of old CheckpointMode for connector testing framework

2024-04-11 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836035#comment-17836035
 ] 

Sergey Nuyanzin commented on FLINK-35060:
-

Merged as 
[dfb827a38bc81fe4610cd0c88c66b8d5da1c0147|https://github.com/apache/flink/commit/dfb827a38bc81fe4610cd0c88c66b8d5da1c0147]

> Provide compatibility of old CheckpointMode for connector testing framework
> ---
>
> Key: FLINK-35060
> URL: https://issues.apache.org/jira/browse/FLINK-35060
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Tests
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> After FLINK-34516, the {{org.apache.flink.streaming.api.CheckpointingMode}} 
> has been moved to {{org.apache.flink.core.execution.CheckpointingMode}}. It 
> introduced a breaking change to connector testing framework as well as to 
> externalized connector repos by mistake. This should be fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35023) YARNApplicationITCase failed on Azure

2024-04-10 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35023.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> YARNApplicationITCase failed on Azure
> -
>
> Key: FLINK-35023
> URL: https://issues.apache.org/jira/browse/FLINK-35023
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: jobmanager.log
>
>
> 1. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
> {code:java}
> Apr 06 02:19:44 02:19:44.063 [ERROR] 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
>  -- Time elapsed: 9.727 s <<< FAILURE!
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion$1(YARNApplicationITCase.java:72)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion(YARNApplicationITCase.java:70)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 2. YARNApplicationITCase.testApplicationClusterWithRemoteUserJar
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithRemoteUserJar$2(YARNApplicationITCase.java:86)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithRemoteUserJar(YARNApplicationITCase.java:84)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 3. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion$0(YARNApplicationITCase.java:62)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44

[jira] [Commented] (FLINK-35023) YARNApplicationITCase failed on Azure

2024-04-10 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835626#comment-17835626
 ] 

Sergey Nuyanzin commented on FLINK-35023:
-

Merged as 
[dec5e9e659dd09346781c97c940a20a6cbc63678|https://github.com/apache/flink/commit/dec5e9e659dd09346781c97c940a20a6cbc63678]

> YARNApplicationITCase failed on Azure
> -
>
> Key: FLINK-35023
> URL: https://issues.apache.org/jira/browse/FLINK-35023
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Attachments: jobmanager.log
>
>
> 1. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
> {code:java}
> Apr 06 02:19:44 02:19:44.063 [ERROR] 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
>  -- Time elapsed: 9.727 s <<< FAILURE!
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion$1(YARNApplicationITCase.java:72)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion(YARNApplicationITCase.java:70)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 2. YARNApplicationITCase.testApplicationClusterWithRemoteUserJar
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithRemoteUserJar$2(YARNApplicationITCase.java:86)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithRemoteUserJar(YARNApplicationITCase.java:84)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 3. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion$0(YARNApplicationITCase.java:62)
> Apr 06 02:19:44   at 
> 

[jira] [Assigned] (FLINK-35023) YARNApplicationITCase failed on Azure

2024-04-10 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-35023:
---

Assignee: Ryan Skraba

> YARNApplicationITCase failed on Azure
> -
>
> Key: FLINK-35023
> URL: https://issues.apache.org/jira/browse/FLINK-35023
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Attachments: jobmanager.log
>
>
> 1. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
> {code:java}
> Apr 06 02:19:44 02:19:44.063 [ERROR] 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
>  -- Time elapsed: 9.727 s <<< FAILURE!
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion$1(YARNApplicationITCase.java:72)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion(YARNApplicationITCase.java:70)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 2. YARNApplicationITCase.testApplicationClusterWithRemoteUserJar
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithRemoteUserJar$2(YARNApplicationITCase.java:86)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithRemoteUserJar(YARNApplicationITCase.java:84)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 3. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion$0(YARNApplicationITCase.java:62)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> 

[jira] [Closed] (FLINK-35023) YARNApplicationITCase failed on Azure

2024-04-10 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-35023.
---

> YARNApplicationITCase failed on Azure
> -
>
> Key: FLINK-35023
> URL: https://issues.apache.org/jira/browse/FLINK-35023
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: jobmanager.log
>
>
> 1. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
> {code:java}
> Apr 06 02:19:44 02:19:44.063 [ERROR] 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion
>  -- Time elapsed: 9.727 s <<< FAILURE!
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion$1(YARNApplicationITCase.java:72)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndDisableUserJarInclusion(YARNApplicationITCase.java:70)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 2. YARNApplicationITCase.testApplicationClusterWithRemoteUserJar
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithRemoteUserJar$2(YARNApplicationITCase.java:86)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.testApplicationClusterWithRemoteUserJar(YARNApplicationITCase.java:84)
> Apr 06 02:19:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Apr 06 02:19:44   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> 3. 
> YARNApplicationITCase.testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion
> {code:java}
> Apr 06 02:19:44 java.lang.AssertionError: Application became FAILED or KILLED 
> while expecting FINISHED
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.waitApplicationFinishedElseKillIt(YarnTestBase.java:1282)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.deployApplication(YARNApplicationITCase.java:116)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YARNApplicationITCase.lambda$testApplicationClusterWithLocalUserJarAndFirstUserJarInclusion$0(YARNApplicationITCase.java:62)
> Apr 06 02:19:44   at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:303)
> Apr 06 02:19:44   at 
> 

[jira] [Comment Edited] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-09 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835457#comment-17835457
 ] 

Sergey Nuyanzin edited comment on FLINK-28693 at 4/9/24 6:02 PM:
-

1.18: 
[f5c62abf7475ea8bc976de2a2079b1a9e29b79df|https://github.com/apache/flink/commit/f5c62abf7475ea8bc976de2a2079b1a9e29b79df]
1.19: 
[b1165a89edb9857754e283c6afd7983a34acd465|https://github.com/apache/flink/commit/b1165a89edb9857754e283c6afd7983a34acd465]


was (Author: sergey nuyanzin):
1.18: 
[f5c62abf7475ea8bc976de2a2079b1a9e29b79df|https://github.com/apache/flink/commit/f5c62abf7475ea8bc976de2a2079b1a9e29b79df]


> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0, 1.19.1
>
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> {color:#00}Code:{color}
> {code:java}
> public class TestUdf extends  ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kakfa
>  # create a derived column using an UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> 

[jira] [Updated] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-28693:

Fix Version/s: 1.18.2

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> {color:#00}Code:{color}
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable);
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> 
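For reference, the same watermark-on-computed-column pattern can be declared in SQL DDL, which exercises the same generated-watermark code path as the Table API reproduction above. This is an illustrative sketch: the function class name and table name are assumptions, not taken from the report.

```sql
-- Illustrative DDL equivalent of the failing Table API schema:
-- the watermark is defined on a computed column produced by a UDF.
-- 'com.example.TestUdf' is a placeholder for the reporter's dummy UDF class.
CREATE TEMPORARY FUNCTION TEST_UDF AS 'com.example.TestUdf';

CREATE TABLE test_source (
    time_stamp STRING,
    udf_ts AS TEST_UDF(time_stamp),
    WATERMARK FOR udf_ts AS udf_ts - INTERVAL '1' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'test_topic',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'json',
    'scan.startup.mode' = 'latest-offset'
);
```

As in the Java reproduction, the Kafka broker does not need to be reachable: the failure occurs while compiling the generated watermark generator, before any data is fetched.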

[jira] [Updated] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-28693:

Fix Version/s: 1.19.1

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0, 1.19.1
>
>

[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-09 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835457#comment-17835457
 ] 

Sergey Nuyanzin commented on FLINK-28693:
-

1.18: 
[f5c62abf7475ea8bc976de2a2079b1a9e29b79df|https://github.com/apache/flink/commit/f5c62abf7475ea8bc976de2a2079b1a9e29b79df]


> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>

[jira] [Commented] (FLINK-35059) Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc

2024-04-09 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835313#comment-17835313
 ] 

Sergey Nuyanzin commented on FLINK-35059:
-

Merged as 
[12f778da715635be21ca48cbaa2cd10490d09235|https://github.com/apache/flink-connector-jdbc/commit/12f778da715635be21ca48cbaa2cd10490d09235]

> Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc
> 
>
> Key: FLINK-35059
> URL: https://issues.apache.org/jira/browse/FLINK-35059
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35059) Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-35059.
---

> Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc
> 
>
> Key: FLINK-35059
> URL: https://issues.apache.org/jira/browse/FLINK-35059
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35059) Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35059.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc
> 
>
> Key: FLINK-35059
> URL: https://issues.apache.org/jira/browse/FLINK-35059
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35059) Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-35059:

Summary: Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in 
flink-connector-jdbc  (was: Bump org.postgresql:postgresql from 42.5.1 to 
42.5.5 in flink-connector-jdbc)

> Bump org.postgresql:postgresql from 42.5.1 to 42.7.3 in flink-connector-jdbc
> 
>
> Key: FLINK-35059
> URL: https://issues.apache.org/jira/browse/FLINK-35059
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35059) Bump org.postgresql:postgresql from 42.5.1 to 42.5.5 in flink-connector-jdbc

2024-04-09 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-35059:
---

 Summary: Bump org.postgresql:postgresql from 42.5.1 to 42.5.5 in 
flink-connector-jdbc
 Key: FLINK-35059
 URL: https://issues.apache.org/jira/browse/FLINK-35059
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / JDBC
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35057) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc connector

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35057.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc 
> connector
> ---
>
> Key: FLINK-35057
> URL: https://issues.apache.org/jira/browse/FLINK-35057
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35057) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc connector

2024-04-09 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835170#comment-17835170
 ] 

Sergey Nuyanzin commented on FLINK-35057:
-

Merged as 
[cd48c4a5b88a34934d51dac805d0ce0c42c4ea02|https://github.com/apache/flink-connector-jdbc/commit/cd48c4a5b88a34934d51dac805d0ce0c42c4ea02]

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc 
> connector
> ---
>
> Key: FLINK-35057
> URL: https://issues.apache.org/jira/browse/FLINK-35057
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35057) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc connector

2024-04-09 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-35057.
---

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc 
> connector
> ---
>
> Key: FLINK-35057
> URL: https://issues.apache.org/jira/browse/FLINK-35057
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835149#comment-17835149
 ] 

Sergey Nuyanzin commented on FLINK-35008:
-

I think it would make sense to go with 1.26.1

I faced https://issues.apache.org/jira/browse/COMPRESS-659 while doing the same for 
the JDBC connector (FLINK-35057):


https://github.com/apache/flink-connector-jdbc/actions/runs/8602243084/job/23571361570#step:15:266


 

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink 
> Kafka connector
> 
>
> Key: FLINK-35008
> URL: https://issues.apache.org/jira/browse/FLINK-35008
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)
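The suggestion above is to skip 1.26.0 (which hits COMPRESS-659) and pin 1.26.1 directly. A minimal sketch of the corresponding Maven dependency, using the coordinates from the issue title; where it lands in the connector's POM (direct dependency vs. dependencyManagement) is an assumption about the project layout:

```xml
<!-- Pin commons-compress to 1.26.1; 1.26.0 is affected by COMPRESS-659. -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-compress</artifactId>
    <version>1.26.1</version>
</dependency>
```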


[jira] [Created] (FLINK-35057) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.1 for Flink jdbc connector

2024-04-08 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-35057:
---

 Summary: Bump org.apache.commons:commons-compress from 1.25.0 to 
1.26.1 for Flink jdbc connector
 Key: FLINK-35057
 URL: https://issues.apache.org/jira/browse/FLINK-35057
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / JDBC
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33459) Support the new source that keeps the same functionality as the original JDBC input format

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33459.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Support the new source that keeps the same functionality as the original JDBC 
> input format
> --
>
> Key: FLINK-33459
> URL: https://issues.apache.org/jira/browse/FLINK-33459
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: RocMarshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33459) Support the new source that keeps the same functionality as the original JDBC input format

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-33459:
---

Assignee: RocMarshal

> Support the new source that keeps the same functionality as the original JDBC 
> input format
> --
>
> Key: FLINK-33459
> URL: https://issues.apache.org/jira/browse/FLINK-33459
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33459) Support the new source that keeps the same functionality as the original JDBC input format

2024-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834899#comment-17834899
 ] 

Sergey Nuyanzin commented on FLINK-33459:
-

Merged as 
[ab5d6159141bdbe8aed78e24c9500a136efbfac0|https://github.com/apache/flink-connector-jdbc/commit/ab5d6159141bdbe8aed78e24c9500a136efbfac0]

> Support the new source that keeps the same functionality as the original JDBC 
> input format
> --
>
> Key: FLINK-33459
> URL: https://issues.apache.org/jira/browse/FLINK-33459
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: RocMarshal
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34896) Migrate CorrelateSortToRankRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-34896:

Component/s: Table SQL / Planner

> Migrate CorrelateSortToRankRule
> ---
>
> Key: FLINK-34896
> URL: https://issues.apache.org/jira/browse/FLINK-34896
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34649) Migrate PushFilterIntoLegacyTableSourceScanRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34649.
-
Resolution: Fixed

Merged as 
[82116865b01f6e4009a64d0be8c381be616dd070|https://github.com/apache/flink/commit/82116865b01f6e4009a64d0be8c381be616dd070]

> Migrate PushFilterIntoLegacyTableSourceScanRule
> ---
>
> Key: FLINK-34649
> URL: https://issues.apache.org/jira/browse/FLINK-34649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34649) Migrate PushFilterIntoLegacyTableSourceScanRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34649.
---

> Migrate PushFilterIntoLegacyTableSourceScanRule
> ---
>
> Key: FLINK-34649
> URL: https://issues.apache.org/jira/browse/FLINK-34649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34649) Migrate PushFilterIntoLegacyTableSourceScanRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-34649:
---

Assignee: Jacky Lau

> Migrate PushFilterIntoLegacyTableSourceScanRule
> ---
>
> Key: FLINK-34649
> URL: https://issues.apache.org/jira/browse/FLINK-34649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34161) Migrate RewriteMinusAllRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34161.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

Merged as 
[97a67277c1d7878f320ab1c67589e05fdd8b153a|https://github.com/apache/flink/commit/97a67277c1d7878f320ab1c67589e05fdd8b153a]

> Migrate RewriteMinusAllRule
> ---
>
> Key: FLINK-34161
> URL: https://issues.apache.org/jira/browse/FLINK-34161
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34161) Migrate RewriteMinusAllRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34161.
---

> Migrate RewriteMinusAllRule
> ---
>
> Key: FLINK-34161
> URL: https://issues.apache.org/jira/browse/FLINK-34161
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34556) Migrate EnumerableToLogicalTableScan

2024-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834858#comment-17834858
 ] 

Sergey Nuyanzin commented on FLINK-34556:
-

Merged as 
[d9510893004c7cf86d45bed941041600beb20158|https://github.com/apache/flink/commit/d9510893004c7cf86d45bed941041600beb20158]

> Migrate EnumerableToLogicalTableScan
> 
>
> Key: FLINK-34556
> URL: https://issues.apache.org/jira/browse/FLINK-34556
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34556) Migrate EnumerableToLogicalTableScan

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34556.
-
Resolution: Fixed

> Migrate EnumerableToLogicalTableScan
> 
>
> Key: FLINK-34556
> URL: https://issues.apache.org/jira/browse/FLINK-34556
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34556) Migrate EnumerableToLogicalTableScan

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34556.
---

> Migrate EnumerableToLogicalTableScan
> 
>
> Key: FLINK-34556
> URL: https://issues.apache.org/jira/browse/FLINK-34556
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34159) Migrate ConstantRankNumberColumnRemoveRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834857#comment-17834857
 ] 

Sergey Nuyanzin commented on FLINK-34159:
-

Merged as 
[3ec9bfb5ba5e43c001fdd876148ff75722e6e4f9|https://github.com/apache/flink/commit/3ec9bfb5ba5e43c001fdd876148ff75722e6e4f9]

> Migrate ConstantRankNumberColumnRemoveRule
> --
>
> Key: FLINK-34159
> URL: https://issues.apache.org/jira/browse/FLINK-34159
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34159) Migrate ConstantRankNumberColumnRemoveRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34159.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Migrate ConstantRankNumberColumnRemoveRule
> --
>
> Key: FLINK-34159
> URL: https://issues.apache.org/jira/browse/FLINK-34159
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34159) Migrate ConstantRankNumberColumnRemoveRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34159.
---

> Migrate ConstantRankNumberColumnRemoveRule
> --
>
> Key: FLINK-34159
> URL: https://issues.apache.org/jira/browse/FLINK-34159
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34162) Migrate LogicalUnnestRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34162.
---

> Migrate LogicalUnnestRule
> -
>
> Key: FLINK-34162
> URL: https://issues.apache.org/jira/browse/FLINK-34162
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Commented] (FLINK-34162) Migrate LogicalUnnestRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834855#comment-17834855
 ] 

Sergey Nuyanzin commented on FLINK-34162:
-

Merged as 
[4d762c2bdc0720e1bf2615e3d30a5741c4688212|https://github.com/apache/flink/commit/4d762c2bdc0720e1bf2615e3d30a5741c4688212]

> Migrate LogicalUnnestRule
> -
>
> Key: FLINK-34162
> URL: https://issues.apache.org/jira/browse/FLINK-34162
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Resolved] (FLINK-34162) Migrate LogicalUnnestRule

2024-04-08 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34162.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Migrate LogicalUnnestRule
> -
>
> Key: FLINK-34162
> URL: https://issues.apache.org/jira/browse/FLINK-34162
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Resolved] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-03 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-28693.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> Code:
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> 

[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-03 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833489#comment-17833489
 ] 

Sergey Nuyanzin commented on FLINK-28693:
-

Merged as 
[0acf92f1c8a90dcb3eb2c1038c1cda3344b7b988|https://github.com/apache/flink/commit/0acf92f1c8a90dcb3eb2c1038c1cda3344b7b988]

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> Code:
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> 

[jira] [Assigned] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-04-03 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-28693:
---

Assignee: xuyang

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> Code:
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
>  

[jira] [Closed] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34950.
---

> Disable spotless on Java 21 for connector-shared-utils
> --
>
> Key: FLINK-34950
> URL: https://issues.apache.org/jira/browse/FLINK-34950
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Affects Versions: connector-parent-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.2.0
>
>
> After https://github.com/apache/flink-connector-shared-utils/pull/19,
> spotless is no longer skipped for Java 17+ in the parent pom;
> however, it still needs to be skipped for Java 21+.
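
A minimal sketch of how such a skip can be expressed in a Maven parent pom. This is an assumption for illustration, not the actual flink-connector-shared-utils configuration: the profile id is hypothetical, while `spotless.skip` is the standard skip property of the spotless-maven-plugin.

```xml
<!-- Hypothetical profile: activates on JDK 21 or newer and skips spotless. -->
<profile>
  <id>skip-spotless-jdk21</id>
  <activation>
    <!-- Maven version range: any JDK from 21 upwards. -->
    <jdk>[21,)</jdk>
  </activation>
  <properties>
    <spotless.skip>true</spotless.skip>
  </properties>
</profile>
```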





[jira] [Resolved] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34950.
-
Fix Version/s: connector-parent-1.2.0
   Resolution: Fixed

> Disable spotless on Java 21 for connector-shared-utils
> --
>
> Key: FLINK-34950
> URL: https://issues.apache.org/jira/browse/FLINK-34950
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Affects Versions: connector-parent-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.2.0
>
>
> After https://github.com/apache/flink-connector-shared-utils/pull/19,
> spotless is no longer skipped for Java 17+ in the parent pom;
> however, it still needs to be skipped for Java 21+.





[jira] [Commented] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831486#comment-17831486
 ] 

Sergey Nuyanzin commented on FLINK-34950:
-

Merged as 
[d719c95235db17f5932d1bb5d917f7d6e195c371|https://github.com/apache/flink-connector-shared-utils/commit/d719c95235db17f5932d1bb5d917f7d6e195c371]

> Disable spotless on Java 21 for connector-shared-utils
> --
>
> Key: FLINK-34950
> URL: https://issues.apache.org/jira/browse/FLINK-34950
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Affects Versions: connector-parent-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> After https://github.com/apache/flink-connector-shared-utils/pull/19,
> spotless is no longer skipped for Java 17+ in the parent pom;
> however, it still needs to be skipped for Java 21+.





[jira] [Created] (FLINK-34951) Flink-ci-mirror stopped running for commits after 22nd of March

2024-03-27 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-34951:
---

 Summary: Flink-ci-mirror stopped running for commits after 22nd of 
March
 Key: FLINK-34951
 URL: https://issues.apache.org/jira/browse/FLINK-34951
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Sergey Nuyanzin


Blocker since it impacts all branches

if we look at {{flink-ci.flink-master-mirror}} 
https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1

we can see that CI was last run against commit 
https://github.com/flink-ci/flink-mirror/commit/4edafcc8b0b96920036a1afaaa37ae87b77668ed

The problem is that this commit was made on the 22nd of March, and a number of 
commits have landed since then, yet there are no CI runs at 
{{flink-ci.flink-master-mirror}} for any of them.

The same check can be done for other branches: CI does not run for newer 
commits there either.






[jira] [Commented] (FLINK-34951) Flink-ci-mirror stopped running for commits after 22nd of March

2024-03-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831251#comment-17831251
 ] 

Sergey Nuyanzin commented on FLINK-34951:
-

cc all release managers, since this blocks the builds for releases:
[~rmetzger], [~uce], [~fanrui], [~guoweijie]

Also [~jingge], since he was able to resolve a similar issue (FLINK-33074) last time.


> Flink-ci-mirror stopped running for commits after 22nd of March
> ---
>
> Key: FLINK-34951
> URL: https://issues.apache.org/jira/browse/FLINK-34951
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> Blocker since it impacts all branches
> if we look at {{flink-ci.flink-master-mirror}} 
> https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1
> we can see that CI was last run against commit 
> https://github.com/flink-ci/flink-mirror/commit/4edafcc8b0b96920036a1afaaa37ae87b77668ed
> The problem is that this commit was made on the 22nd of March, and a number 
> of commits have landed since then, yet there are no CI runs at 
> {{flink-ci.flink-master-mirror}} for any of them.
> The same check can be done for other branches: CI does not run for newer 
> commits there either.





[jira] [Created] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-27 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-34950:
---

 Summary: Disable spotless on Java 21 for connector-shared-utils
 Key: FLINK-34950
 URL: https://issues.apache.org/jira/browse/FLINK-34950
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Parent
Affects Versions: connector-parent-1.1.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


After https://github.com/apache/flink-connector-shared-utils/pull/19,
spotless is no longer skipped for Java 17+ in the parent pom;
however, it still needs to be skipped for Java 21+.





[jira] [Comment Edited] (FLINK-34943) Support Flink 1.19, 1.20-SNAPSHOT for JDBC connector

2024-03-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831210#comment-17831210
 ] 

Sergey Nuyanzin edited comment on FLINK-34943 at 3/27/24 8:19 AM:
--

FYI, there is already an existing PR for this, submitted about 10 days ago and 
passing the existing tests:
https://github.com/apache/flink-connector-jdbc/pull/107


was (Author: sergey nuyanzin):
fyi there is already an existing PR for that, submitted about 10 days ago and 
passing existing tests


> Support Flink 1.19, 1.20-SNAPSHOT for JDBC connector
> 
>
> Key: FLINK-34943
> URL: https://issues.apache.org/jira/browse/FLINK-34943
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Zhongqiang Gong
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-34943) Support Flink 1.19, 1.20-SNAPSHOT for JDBC connector

2024-03-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831210#comment-17831210
 ] 

Sergey Nuyanzin commented on FLINK-34943:
-

FYI, there is already an existing PR for this, submitted about 10 days ago and 
passing the existing tests.


> Support Flink 1.19, 1.20-SNAPSHOT for JDBC connector
> 
>
> Key: FLINK-34943
> URL: https://issues.apache.org/jira/browse/FLINK-34943
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Zhongqiang Gong
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Resolved] (FLINK-34941) Cannot convert org.apache.flink.streaming.api.CheckpointingMode to org.apache.flink.core.execution.CheckpointingMode

2024-03-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34941.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Cannot convert org.apache.flink.streaming.api.CheckpointingMode  to 
> org.apache.flink.core.execution.CheckpointingMode
> -
>
> Key: FLINK-34941
> URL: https://issues.apache.org/jira/browse/FLINK-34941
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Connectors / ElasticSearch, Runtime / 
> Checkpointing
>Affects Versions: 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> After change FLINK-34516, the elasticsearch connector build for 1.20-SNAPSHOT 
> started failing with:
> {noformat}
>  Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[75,5]
>  method does not override or implement a method from a supertype
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[85,84]
>  incompatible types: org.apache.flink.streaming.api.CheckpointingMode cannot 
> be converted to org.apache.flink.core.execution.CheckpointingMode
> {noformat}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8436631571/job/23104522666#step:15:12668
> Set as blocker since every build of the elasticsearch connector against 
> 1.20-SNAPSHOT is now failing.
> The same issue probably affects the opensearch connector.
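
The root cause is two distinct enum types that happen to share constant names, so a value of one cannot be used where the other is expected. The sketch below illustrates this with stand-in enums rather than the real Flink classes, along with one way to bridge such types by constant name while both coexist:

```java
// Stand-ins for the two incompatible enums; the real types live in
// org.apache.flink.streaming.api and org.apache.flink.core.execution.
enum LegacyCheckpointingMode { EXACTLY_ONCE, AT_LEAST_ONCE }
enum NewCheckpointingMode { EXACTLY_ONCE, AT_LEAST_ONCE }

class CheckpointingModeBridge {
    // The enums are source-incompatible even though their constants match,
    // so an explicit conversion by constant name is required.
    static NewCheckpointingMode convert(LegacyCheckpointingMode legacy) {
        return NewCheckpointingMode.valueOf(legacy.name());
    }

    public static void main(String[] args) {
        System.out.println(convert(LegacyCheckpointingMode.EXACTLY_ONCE)); // prints EXACTLY_ONCE
    }
}
```

In the connector code itself the fix is simply to compile against the new enum; a name-based bridge like this is only useful while code has to work with both types.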





[jira] [Commented] (FLINK-34941) Cannot convert org.apache.flink.streaming.api.CheckpointingMode to org.apache.flink.core.execution.CheckpointingMode

2024-03-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831207#comment-17831207
 ] 

Sergey Nuyanzin commented on FLINK-34941:
-

Merged as 
[7d390a982f1f6c42b98ce85ed38f60221bb7b526|https://github.com/apache/flink/commit/7d390a982f1f6c42b98ce85ed38f60221bb7b526]

> Cannot convert org.apache.flink.streaming.api.CheckpointingMode  to 
> org.apache.flink.core.execution.CheckpointingMode
> -
>
> Key: FLINK-34941
> URL: https://issues.apache.org/jira/browse/FLINK-34941
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Connectors / ElasticSearch, Runtime / 
> Checkpointing
>Affects Versions: 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available
>
> After change FLINK-34516, the elasticsearch connector build for 1.20-SNAPSHOT 
> started failing with:
> {noformat}
>  Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[75,5]
>  method does not override or implement a method from a supertype
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[85,84]
>  incompatible types: org.apache.flink.streaming.api.CheckpointingMode cannot 
> be converted to org.apache.flink.core.execution.CheckpointingMode
> {noformat}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8436631571/job/23104522666#step:15:12668
> Set as blocker since every build of the elasticsearch connector against 
> 1.20-SNAPSHOT is now failing.
> The same issue probably affects the opensearch connector.





[jira] [Closed] (FLINK-34941) Cannot convert org.apache.flink.streaming.api.CheckpointingMode to org.apache.flink.core.execution.CheckpointingMode

2024-03-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34941.
---

> Cannot convert org.apache.flink.streaming.api.CheckpointingMode  to 
> org.apache.flink.core.execution.CheckpointingMode
> -
>
> Key: FLINK-34941
> URL: https://issues.apache.org/jira/browse/FLINK-34941
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Connectors / ElasticSearch, Runtime / 
> Checkpointing
>Affects Versions: 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> After change FLINK-34516, the elasticsearch connector build for 1.20-SNAPSHOT 
> started failing with:
> {noformat}
>  Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[75,5]
>  method does not override or implement a method from a supertype
> Error:  
> /home/runner/work/flink-connector-elasticsearch/flink-connector-elasticsearch/flink-connector-elasticsearch-e2e-tests/flink-connector-elasticsearch-e2e-tests-common/src/main/java/org/apache/flink/streaming/tests/ElasticsearchSinkE2ECaseBase.java:[85,84]
>  incompatible types: org.apache.flink.streaming.api.CheckpointingMode cannot 
> be converted to org.apache.flink.core.execution.CheckpointingMode
> {noformat}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/8436631571/job/23104522666#step:15:12668
> Set as blocker since every build of the elasticsearch connector against 
> 1.20-SNAPSHOT is now failing.
> The same issue probably affects the opensearch connector.




