[jira] [Comment Edited] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819508#comment-17819508
 ] 

Matthias Pohl edited comment on FLINK-29114 at 2/22/24 7:42 AM:


* https://github.com/apache/flink/actions/runs/7998412179/job/21844789710#step:10:11564
* https://github.com/apache/flink/actions/runs/7998412179/job/21844781436#step:10:11488
* https://github.com/apache/flink/actions/runs/7998412179/job/21844769564#step:10:11615
* https://github.com/apache/flink/actions/runs/7998749202/job/21845587274#step:10:11630


was (Author: mapohl):
* https://github.com/apache/flink/actions/runs/7998412179/job/21844789710#step:10:11564
* https://github.com/apache/flink/actions/runs/7998412179/job/21844781436#step:10:11488
* https://github.com/apache/flink/actions/runs/7998412179/job/21844769564#step:10:11615

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
> Attachments: FLINK-29114.log
>
>
> It can be reproduced locally by repeating the test. Usually about 100 
> iterations are enough to produce several failures
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> 

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819508#comment-17819508
 ] 

Matthias Pohl commented on FLINK-29114:
---

* https://github.com/apache/flink/actions/runs/7998412179/job/21844789710#step:10:11564
* https://github.com/apache/flink/actions/runs/7998412179/job/21844781436#step:10:11488
* https://github.com/apache/flink/actions/runs/7998412179/job/21844769564#step:10:11615

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
> Attachments: FLINK-29114.log
>
>
> It can be reproduced locally by repeating the test. Usually about 100 
> iterations are enough to produce several failures
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> 

[jira] [Commented] (FLINK-26644) python StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies failed on azure

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819507#comment-17819507
 ] 

Matthias Pohl commented on FLINK-26644:
---

https://github.com/apache/flink/actions/runs/7998412166/job/21844766133#step:10:24268

> python 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies 
> failed on azure
> ---
>
> Key: FLINK-26644
> URL: https://issues.apache.org/jira/browse/FLINK-26644
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.4, 1.15.0, 1.16.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-03-14T18:50:24.6842853Z Mar 14 18:50:24 
> === FAILURES 
> ===
> 2022-03-14T18:50:24.6844089Z Mar 14 18:50:24 _ 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies _
> 2022-03-14T18:50:24.6844846Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6846063Z Mar 14 18:50:24 self = 
> <StreamExecutionEnvironmentTests testMethod=test_generate_stream_graph_with_dependencies>
> 2022-03-14T18:50:24.6847104Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6847766Z Mar 14 18:50:24 def 
> test_generate_stream_graph_with_dependencies(self):
> 2022-03-14T18:50:24.6848677Z Mar 14 18:50:24 python_file_dir = 
> os.path.join(self.tempdir, "python_file_dir_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6849833Z Mar 14 18:50:24 os.mkdir(python_file_dir)
> 2022-03-14T18:50:24.6850729Z Mar 14 18:50:24 python_file_path = 
> os.path.join(python_file_dir, "test_stream_dependency_manage_lib.py")
> 2022-03-14T18:50:24.6852679Z Mar 14 18:50:24 with 
> open(python_file_path, 'w') as f:
> 2022-03-14T18:50:24.6853646Z Mar 14 18:50:24 f.write("def 
> add_two(a):\nreturn a + 2")
> 2022-03-14T18:50:24.6854394Z Mar 14 18:50:24 env = self.env
> 2022-03-14T18:50:24.6855019Z Mar 14 18:50:24 
> env.add_python_file(python_file_path)
> 2022-03-14T18:50:24.6855519Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6856254Z Mar 14 18:50:24 def plus_two_map(value):
> 2022-03-14T18:50:24.6857045Z Mar 14 18:50:24 from 
> test_stream_dependency_manage_lib import add_two
> 2022-03-14T18:50:24.6857865Z Mar 14 18:50:24 return value[0], 
> add_two(value[1])
> 2022-03-14T18:50:24.6858466Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6858924Z Mar 14 18:50:24 def add_from_file(i):
> 2022-03-14T18:50:24.6859806Z Mar 14 18:50:24 with 
> open("data/data.txt", 'r') as f:
> 2022-03-14T18:50:24.6860266Z Mar 14 18:50:24 return i[0], 
> i[1] + int(f.read())
> 2022-03-14T18:50:24.6860879Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6862022Z Mar 14 18:50:24 from_collection_source = 
> env.from_collection([('a', 0), ('b', 0), ('c', 1), ('d', 1),
> 2022-03-14T18:50:24.6863259Z Mar 14 18:50:24  
>  ('e', 2)],
> 2022-03-14T18:50:24.6864057Z Mar 14 18:50:24  
> type_info=Types.ROW([Types.STRING(),
> 2022-03-14T18:50:24.6864651Z Mar 14 18:50:24  
>  Types.INT()]))
> 2022-03-14T18:50:24.6865150Z Mar 14 18:50:24 
> from_collection_source.name("From Collection")
> 2022-03-14T18:50:24.6866212Z Mar 14 18:50:24 keyed_stream = 
> from_collection_source.key_by(lambda x: x[1], key_type=Types.INT())
> 2022-03-14T18:50:24.6867083Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6867793Z Mar 14 18:50:24 plus_two_map_stream = 
> keyed_stream.map(plus_two_map).name("Plus Two Map").set_parallelism(3)
> 2022-03-14T18:50:24.6868620Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6869412Z Mar 14 18:50:24 add_from_file_map = 
> plus_two_map_stream.map(add_from_file).name("Add From File Map")
> 2022-03-14T18:50:24.6870239Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6870883Z Mar 14 18:50:24 test_stream_sink = 
> add_from_file_map.add_sink(self.test_sink).name("Test Sink")
> 2022-03-14T18:50:24.6871803Z Mar 14 18:50:24 
> test_stream_sink.set_parallelism(4)
> 2022-03-14T18:50:24.6872291Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6872756Z Mar 14 18:50:24 archive_dir_path = 
> os.path.join(self.tempdir, "archive_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6873557Z Mar 14 18:50:24 
> os.mkdir(archive_dir_path)
> 2022-03-14T18:50:24.6874817Z Mar 14 18:50:24 with 
> open(os.path.join(archive_dir_path, "data.txt"), 'w') as f:
> 2022-03-14T18:50:24.6875414Z Mar 14 18:50:24 f.write("3")
> 2022-03-14T18:50:24.6875906Z Mar 14 18:50:24 archive_file_path = \

[jira] [Created] (FLINK-34495) Resuming Savepoint (rocks, scale up, heap timers) end-to-end test failure

2024-02-21 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34495:
-

 Summary: Resuming Savepoint (rocks, scale up, heap timers) 
end-to-end test failure
 Key: FLINK-34495
 URL: https://issues.apache.org/jira/browse/FLINK-34495
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.20.0
Reporter: Matthias Pohl


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57760&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=5d91035e-8022-55f2-2d4f-ab121508bf7e&l=2010

I suspect the failure occurred because a checkpoint failure preceded it:
{code}
Feb 22 00:49:16 2024-02-22 00:49:04,305 WARN  
org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to 
trigger or complete checkpoint 12 for job 3c9ffc670ead2cb3c4118410cbef3b72. (0 
consecutive failed attempts so far)
Feb 22 00:49:16 org.apache.flink.runtime.checkpoint.CheckpointException: 
Checkpoint Coordinator is suspending.
Feb 22 00:49:16 at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.stopCheckpointScheduler(CheckpointCoordinator.java:2056)
 ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.scheduler.SchedulerBase.stopCheckpointScheduler(SchedulerBase.java:960)
 ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.scheduler.SchedulerBase.stopWithSavepoint(SchedulerBase.java:1030)
 ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.jobmaster.JobMaster.stopWithSavepoint(JobMaster.java:901)
 ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
Feb 22 00:49:16 at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 ~[?:?]
Feb 22 00:49:16 at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
Feb 22 00:49:16 at java.lang.reflect.Method.invoke(Method.java:566) 
~[?:?]
Feb 22 00:49:16 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309)
 ~[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
 ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307)
 ~[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222)
 ~[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85)
 ~[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168)
 ~[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) 
[flink-rpc-akkad6c8f388-439d-487d-ab4d-9a34a56cbc0d.jar:1.20-SNAPSHOT]
Feb 22 00:49:16 at 
org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) 

[jira] [Commented] (FLINK-34424) BoundedBlockingSubpartitionWriteReadTest#testRead10ConsumersConcurrent times out

2024-02-21 Thread Yunfeng Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819495#comment-17819495
 ] 

Yunfeng Zhou commented on FLINK-34424:
--

Hi [~mapohl] and [~pnowojski], [~tanyuxin] and I have not been able to find 
the cause of this problem so far. As far as I have investigated, a Java thread 
can be blocked without an explicit "waiting to lock ..." hint if JNI calls 
like MonitorEnter/MonitorExit, internal locks like Semaphore and 
CountDownLatch, or low-level synchronization primitives like 
LockSupport.park() are involved. However, I did not find any match for these 
patterns in the blocked thread's stack. Besides, it seems that Java's GC can 
also leave a thread in a blocked state, so we are not even sure there is a 
blocking or deadlock issue to resolve.

Given that we could not reproduce this error locally, nor have we found a 
similar error in the Flink CI history, it seems to be a low-probability issue 
that can be ignored for the time being. Could we mark this issue as low 
priority for now and maybe revisit it when we have more information about 
failures like this?
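
To illustrate the LockSupport point: a minimal, self-contained example (not 
from the Flink code base) of a thread that is stuck but shows no 
"waiting to lock <0x...>" line in a thread dump:
{code:java}
import java.util.concurrent.locks.LockSupport;

public class ParkedWithoutMonitor {
    public static void main(String[] args) throws InterruptedException {
        // The thread parks itself. In a jstack dump it shows up as
        // "WAITING (parking)" without any "waiting to lock <0x...>" hint,
        // because no object monitor is involved.
        Thread parked = new Thread(LockSupport::park, "parked-thread");
        parked.start();
        Thread.sleep(200);
        System.out.println(parked.getName() + " state: " + parked.getState()); // WAITING
        LockSupport.unpark(parked);
        parked.join();
    }
}
{code}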

> BoundedBlockingSubpartitionWriteReadTest#testRead10ConsumersConcurrent times 
> out
> 
>
> Key: FLINK-34424
> URL: https://issues.apache.org/jira/browse/FLINK-34424
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Assignee: Yunfeng Zhou
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57446&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9151
> {code}
> Feb 11 13:55:29 "ForkJoinPool-50-worker-25" #414 daemon prio=5 os_prio=0 
> tid=0x7f19503af800 nid=0x284c in Object.wait() [0x7f191b6db000]
> Feb 11 13:55:29java.lang.Thread.State: WAITING (on object monitor)
> Feb 11 13:55:29   at java.lang.Object.wait(Native Method)
> Feb 11 13:55:29   at java.lang.Thread.join(Thread.java:1252)
> Feb 11 13:55:29   - locked <0xe2e019a8> (a 
> org.apache.flink.runtime.io.network.partition.BoundedBlockingSubpartitionWriteReadTest$LongReader)
> Feb 11 13:55:29   at 
> org.apache.flink.core.testutils.CheckedThread.trySync(CheckedThread.java:104)
> Feb 11 13:55:29   at 
> org.apache.flink.core.testutils.CheckedThread.sync(CheckedThread.java:92)
> Feb 11 13:55:29   at 
> org.apache.flink.core.testutils.CheckedThread.sync(CheckedThread.java:81)
> Feb 11 13:55:29   at 
> org.apache.flink.runtime.io.network.partition.BoundedBlockingSubpartitionWriteReadTest.testRead10ConsumersConcurrent(BoundedBlockingSubpartitionWriteReadTest.java:177)
> Feb 11 13:55:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-25537] [JUnit5 Migration] Module: flink-core package api-common [flink]

2024-02-21 Thread via GitHub


snuyanzin commented on PR #23960:
URL: https://github.com/apache/flink/pull/23960#issuecomment-1958812264

   @GOODBOY008 can you please double-check that the number of tests before and 
after this PR is still the same?
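
   One way to double-check is to count the tests the JUnit Platform discovers 
on both branches and compare (a sketch; the package selector below is an 
assumption, not taken from the PR):

       import org.junit.platform.launcher.Launcher;
       import org.junit.platform.launcher.LauncherDiscoveryRequest;
       import org.junit.platform.launcher.TestIdentifier;
       import org.junit.platform.launcher.TestPlan;
       import org.junit.platform.launcher.core.LauncherFactory;

       import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;
       import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

       public class CountTests {
           public static void main(String[] args) {
               // Discover (without running) all tests under the migrated package
               // and print the count; run once on master and once on the PR branch.
               LauncherDiscoveryRequest discovery =
                       request().selectors(selectPackage("org.apache.flink.api.common")).build();
               Launcher launcher = LauncherFactory.create();
               TestPlan plan = launcher.discover(discovery);
               System.out.println("Discovered tests: " + plan.countTestIdentifiers(TestIdentifier::isTest));
           }
       }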


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33500][Runtime] Run storing the JobGraph an asynchronous operation [flink]

2024-02-21 Thread via GitHub


flinkbot commented on PR #24366:
URL: https://github.com/apache/flink/pull/24366#issuecomment-1958766054

   
   ## CI report:
   
   * 8ce3e530ab2aef4d06a98f3c0cfa55a49796deee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-18255) Add API annotations to RocksDB user-facing classes

2024-02-21 Thread Jinzhong Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819478#comment-17819478
 ] 

Jinzhong Li commented on FLINK-18255:
-

[~yunta]  [~Yanfei Lei]    I have drafted a FLIP 
([FLIP-420|https://cwiki.apache.org/confluence/display/FLINK/FLIP-420%3A+Add+API+annotations+for+RocksDB+StateBackend+user-facing+classes])
 and opened a 
[discussion|https://lists.apache.org/thread/4t71lz2j2ft8hf90ylvtomynhr2qthoo] 
on the dev mailing list. Could you please help review the interface annotation changes?

> Add API annotations to RocksDB user-facing classes
> --
>
> Key: FLINK-18255
> URL: https://issues.apache.org/jira/browse/FLINK-18255
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.11.0
>Reporter: Nico Kruber
>Assignee: Jinzhong Li
>Priority: Major
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Fix For: 1.19.0
>
>
> Several user-facing classes in {{flink-statebackend-rocksdb}} don't have any 
> API annotations, not even {{@PublicEvolving}}. These should be added to 
> clarify their usage.
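
A minimal sketch of the kind of change proposed, on a hypothetical class (the 
class and method below are illustrative placeholders, not the actual FLIP-420 
annotation targets):
{code:java}
import org.apache.flink.annotation.PublicEvolving;

/**
 * Hypothetical user-facing options class in flink-statebackend-rocksdb.
 * The PublicEvolving annotation marks it as public API whose signatures
 * may still change between minor releases -- exactly the guarantee that
 * is currently left implicit.
 */
@PublicEvolving
public class ExampleRocksDBOptions {

    private long blockCacheSizeBytes;

    /** Illustrative setter, not real Flink API. */
    public ExampleRocksDBOptions setBlockCacheSize(long bytes) {
        this.blockCacheSizeBytes = bytes;
        return this;
    }
}
{code}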





[jira] [Resolved] (FLINK-34458) Rename options for Generalized incremental checkpoints (changelog)

2024-02-21 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-34458.
--
Resolution: Fixed

merged e7e973e2 into master.

> Rename options for Generalized incremental checkpoints (changelog)
> --
>
> Key: FLINK-34458
> URL: https://issues.apache.org/jira/browse/FLINK-34458
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






Re: [PR] [FLINK-34458][checkpointing] Rename options for Generalized incremental checkpoints (changelog) [flink]

2024-02-21 Thread via GitHub


masteryhx closed pull request #24324: [FLINK-34458][checkpointing] Rename 
options for Generalized incremental checkpoints (changelog)
URL: https://github.com/apache/flink/pull/24324





[PR] [FLINK-33500][Runtime] Run storing the JobGraph an asynchronous operation [flink]

2024-02-21 Thread via GitHub


zhengzhili333 opened a new pull request, #24366:
URL: https://github.com/apache/flink/pull/24366

   ## What is the purpose of the change
   
   *Currently, submitting a job starts with storing the JobGraph (in HA setups) 
in the JobGraphStore. This includes writing the file to S3 (or some other 
remote file system). The job submission is done in the Dispatcher's main 
thread. If writing the JobGraph is slow, it would block any other operation on 
the Dispatcher.*
   
   
   ## Brief change log
   
 - *The Dispatcher puts the JobGraph asynchronously via the ioExecutor*
 - *The Dispatcher writes to the ExecutionGraphInfoStore asynchronously via 
the ioExecutor*
 - *The JobGraphWriter interface adds a putJobGraphAsync method for 
asynchronous write operations*
 - *The implementation class DefaultJobGraphStore adds the putJobGraphAsync 
method for asynchronous write operations (a rough sketch follows below)*
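
   A rough, hypothetical sketch of what such an asynchronous write path could 
look like (only `putJobGraphAsync` and `DefaultJobGraphStore` are named in the 
change log; everything else below is an assumption, not the PR's actual code):

       import java.util.concurrent.CompletableFuture;
       import java.util.concurrent.Executor;

       /** Placeholder for org.apache.flink.runtime.jobgraph.JobGraph. */
       class JobGraph {}

       /** Hypothetical slice of the JobGraphWriter interface. */
       interface JobGraphWriter {
           /** Blocking write, e.g. to S3 or another remote file system. */
           void putJobGraph(JobGraph jobGraph) throws Exception;

           /**
            * Off-loads the blocking write to an I/O executor so that the
            * Dispatcher's main thread is not stalled by a slow file system.
            */
           default CompletableFuture<Void> putJobGraphAsync(JobGraph jobGraph, Executor ioExecutor) {
               return CompletableFuture.runAsync(
                       () -> {
                           try {
                               putJobGraph(jobGraph);
                           } catch (Exception e) {
                               throw new RuntimeException("Storing the JobGraph failed", e);
                           }
                       },
                       ioExecutor);
           }
       }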
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added a Dispatcher job-submission test using ZooKeeperStateHandleStore 
as the JobGraphStore*
 - *Added a Dispatcher job-submission test using 
KubernetesStateHandleStore as the JobGraphStore*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Resolved] (FLINK-29802) ChangelogStateBackend supports native savepoint

2024-02-21 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-29802.
--
Fix Version/s: 1.20.0
   (was: 1.19.0)
   Resolution: Fixed

merged b62de02f...9308e10c into master.

> ChangelogStateBackend supports native savepoint
> ---
>
> Key: FLINK-29802
> URL: https://issues.apache.org/jira/browse/FLINK-29802
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






Re: [PR] [FLINK-29802][state] Changelog supports native savepoint [flink]

2024-02-21 Thread via GitHub


masteryhx closed pull request #22744: [FLINK-29802][state] Changelog supports 
native savepoint
URL: https://github.com/apache/flink/pull/22744


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34354) Release Testing: Verify FLINK-34037 Improve Serialization Configuration and Usage in Flink

2024-02-21 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-34354:
--
Description: 
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verified:
 # The old way of configuring serialization via hard-coded method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also works for:
 ** Register serializer for POJO types
 ** Register Kryo serializer for generic types
 ** Register Kryo serializer as the default Kryo serializer for types
 ** Register custom type info factories

  was:
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verfied:
 # The old way of configuring serialization via hard-code method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also work.


> Release Testing: Verify FLINK-34037 Improve Serialization Configuration and 
> Usage in Flink
> --
>
> Key: FLINK-34354
> URL: https://issues.apache.org/jira/browse/FLINK-34354
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify 
> [FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].
> Volunteers can verify it by following the [doc 
> changes|https://github.com/apache/flink/pull/24230]. Basically, two parts 
> need to be verified:
>  # The old way of configuring serialization via hard-coded method calls still 
> works:
>  ** ExecutionConfig#registerType (for both POJO and generic types)
>  ** ExecutionConfig#addDefaultKryoSerializer
>  ** ExecutionConfig#registerTypeWithKryoSerializer
>  # The new way of configuring serialization via config option 
> [pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
>  also works for:
>  ** Register serializer for POJO types
>  ** Register Kryo serializer for generic types
>  ** Register Kryo serializer as the default Kryo serializer for types
>  ** Register custom type info factories
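
For reference, a minimal sketch of the two styles being verified (the POJO and 
serializer classes are placeholders, and the config snippet in the trailing 
comment paraphrases the linked documentation rather than quoting it):
{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SerializationConfigExample {

    /** Placeholder POJO type (assumption, not from the issue). */
    public static class MyPojo {
        public int id;
    }

    /** Placeholder Kryo serializer for MyPojo (assumption). */
    public static class MyKryoSerializer extends Serializer<MyPojo> {
        @Override
        public void write(Kryo kryo, Output output, MyPojo pojo) {
            output.writeInt(pojo.id);
        }

        @Override
        public MyPojo read(Kryo kryo, Input input, Class<MyPojo> type) {
            MyPojo pojo = new MyPojo();
            pojo.id = input.readInt();
            return pojo;
        }
    }

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Old way: hard-coded registration calls.
        env.registerType(MyPojo.class);
        env.getConfig().addDefaultKryoSerializer(MyPojo.class, MyKryoSerializer.class);
        env.getConfig().registerTypeWithKryoSerializer(MyPojo.class, MyKryoSerializer.class);

        // New way: no code changes; set the option in the Flink configuration,
        // e.g. (see the linked docs for the authoritative syntax):
        //   pipeline.serialization-config:
        //     - org.example.MyPojo: {type: pojo}
        //     - org.example.MyType: {type: kryo, kryo-type: registered, class: org.example.MyKryoSerializer}
    }
}
{code}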





[jira] [Updated] (FLINK-34354) Release Testing: Verify FLINK-34037 Improve Serialization Configuration and Usage in Flink

2024-02-21 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-34354:
--
Description: 
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verified:
 # The old way of configuring serialization via hard-coded method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also works for:
 ** Register serializer for POJO types
 ** Register Kryo serializer for generic types
 ** Register Kryo serializer as the default Kryo serializer for types
 ** Register custom type info factories

  was:
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verfied:
 # The old way of configuring serialization via hard-code method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also works for:
 ** Regieter serializer for POJO types
 ** Register Kryo serializer for generic types
 ** Register Kryo serializer as the default Kryo serializer for types
 ** Register custome type info factories


> Release Testing: Verify FLINK-34037 Improve Serialization Configuration and 
> Usage in Flink
> --
>
> Key: FLINK-34354
> URL: https://issues.apache.org/jira/browse/FLINK-34354
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify 
> [FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].
> Volunteers can verify it by following the [doc 
> changes|https://github.com/apache/flink/pull/24230]. Basically, two parts 
> need to be verified:
>  # The old way of configuring serialization via hard-coded method calls still 
> works:
>  ** ExecutionConfig#registerType (for both POJO and generic types)
>  ** ExecutionConfig#addDefaultKryoSerializer
>  ** ExecutionConfig#registerTypeWithKryoSerializer
>  # The new way of configuring serialization via config option 
> [pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
>  also works for:
>  ** Register serializer for POJO types
>  ** Register Kryo serializer for generic types
>  ** Register Kryo serializer as the default Kryo serializer for types
>  ** Register custom type info factories





[jira] [Updated] (FLINK-34354) Release Testing: Verify FLINK-34037 Improve Serialization Configuration and Usage in Flink

2024-02-21 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-34354:
--
Description: 
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verified:
 # The old way of configuring serialization via hard-coded method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also works.

  was:
Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230], more background can be 
found at . Basically, two parts need to be verfied:
 # The old way of configuring serialization via hard-code method calls still 
works.
 # The new way of configuring serialization via config options also work.


> Release Testing: Verify FLINK-34037 Improve Serialization Configuration and 
> Usage in Flink
> --
>
> Key: FLINK-34354
> URL: https://issues.apache.org/jira/browse/FLINK-34354
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify 
> [FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].
> Volunteers can verify it by following the [doc 
> changes|https://github.com/apache/flink/pull/24230]. Basically, two parts 
> need to be verified:
>  # The old way of configuring serialization via hard-coded method calls still 
> works:
>  ** ExecutionConfig#registerType (for both POJO and generic types)
>  ** ExecutionConfig#addDefaultKryoSerializer
>  ** ExecutionConfig#registerTypeWithKryoSerializer
>  # The new way of configuring serialization via config option 
> [pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
>  also works.





[jira] [Updated] (FLINK-34354) Release Testing: Verify FLINK-34037 Improve Serialization Configuration and Usage in Flink

2024-02-21 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-34354:
--
Description: 
This issue aims to verify 
[FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verified:
 # The old way of configuring serialization via hard-coded method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
 also works.

  was:
This issue aims to verify 
[FLIP-398|[https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].]

Volunteers can verify it by following the [doc 
changes|https://github.com/apache/flink/pull/24230]. Basically, two parts need 
to be verfied:
 # The old way of configuring serialization via hard-code method calls still 
works:
 ** ExecutionConfig#registerType (for both POJO and generic types)
 ** ExecutionConfig#addDefaultKryoSerializer
 ** ExecutionConfig#registerTypeWithKryoSerializer
 # The new way of configuring serialization via config option 
[[pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]|#pipeline-serialization-config]
 also work.


> Release Testing: Verify FLINK-34037 Improve Serialization Configuration and 
> Usage in Flink
> --
>
> Key: FLINK-34354
> URL: https://issues.apache.org/jira/browse/FLINK-34354
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This issue aims to verify 
> [FLIP-398|https://cwiki.apache.org/confluence/display/FLINK/FLIP-398%3A+Improve+Serialization+Configuration+And+Usage+In+Flink].
> Volunteers can verify it by following the [doc 
> changes|https://github.com/apache/flink/pull/24230]. Basically, two parts 
> need to be verified:
>  # The old way of configuring serialization via hard-coded method calls still 
> works:
>  ** ExecutionConfig#registerType (for both POJO and generic types)
>  ** ExecutionConfig#addDefaultKryoSerializer
>  ** ExecutionConfig#registerTypeWithKryoSerializer
>  # The new way of configuring serialization via config option 
> [pipeline.serialization-config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config]
>  also works.





Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-02-21 Thread via GitHub


wanglijie95 commented on PR #2:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/2#issuecomment-1958712785

   @eskabetxe FYI, the failed tests should have been fixed in FLINK-34358





Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


mohitjain2504 commented on code in PR #24365:
URL: https://github.com/apache/flink/pull/24365#discussion_r1498640212


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/SplitFunction.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#SPLIT}. */
+@Internal
+public class SplitFunction extends BuiltInScalarFunction {
+public SplitFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.SPLIT, context);
+}
+
+public @Nullable ArrayData eval(@Nullable StringData string, @Nullable 
StringData delimiter) {
+try {
+if (string == null || delimiter == null) {
+return null;
+}
+String string1 = string.toString();
+String delimiter1 = delimiter.toString();
+List<BinaryStringData> resultList = new ArrayList<>();
+
+if (delimiter1.equals("")) {
+for (char c : string1.toCharArray()) {
+
resultList.add(BinaryStringData.fromString(String.valueOf(c)));
+}
+} else {
+int start = 0;
+int end = string1.indexOf(delimiter1);
+while (end != -1) {
+String substring = string1.substring(start, end);
+resultList.add(
+BinaryStringData.fromString(
+substring.isEmpty()
+? ""
+: substring)); // Added this check 
to handle consecutive
+// delimiters
+start = end + delimiter1.length();
+end = string1.indexOf(delimiter1, start);
+}
+String remaining = string1.substring(start);
+resultList.add(
+BinaryStringData.fromString(
+remaining.isEmpty()
+? ""

Review Comment:
   Similar issue: isEmpty check is redundant






Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


mohitjain2504 commented on code in PR #24365:
URL: https://github.com/apache/flink/pull/24365#discussion_r1498639719


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/SplitFunction.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#SPLIT}. */
+@Internal
+public class SplitFunction extends BuiltInScalarFunction {
+public SplitFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.SPLIT, context);
+}
+
+public @Nullable ArrayData eval(@Nullable StringData string, @Nullable 
StringData delimiter) {
+try {
+if (string == null || delimiter == null) {
+return null;
+}
+String string1 = string.toString();

Review Comment:
   Consider renaming these to `str` and `delim` instead of `string1` and `delimiter1`.






Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


mohitjain2504 commented on code in PR #24365:
URL: https://github.com/apache/flink/pull/24365#discussion_r1498639316


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/SplitFunction.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#SPLIT}. */
+@Internal
+public class SplitFunction extends BuiltInScalarFunction {
+public SplitFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.SPLIT, context);
+}
+
+public @Nullable ArrayData eval(@Nullable StringData string, @Nullable 
StringData delimiter) {
+try {
+if (string == null || delimiter == null) {
+return null;
+}
+String string1 = string.toString();
+String delimiter1 = delimiter.toString();
+List<BinaryStringData> resultList = new ArrayList<>();
+
+if (delimiter1.equals("")) {
+for (char c : string1.toCharArray()) {
+
resultList.add(BinaryStringData.fromString(String.valueOf(c)));
+}
+} else {
+int start = 0;
+int end = string1.indexOf(delimiter1);
+while (end != -1) {
+String substring = string1.substring(start, end);
+resultList.add(
+BinaryStringData.fromString(
+substring.isEmpty()
+? ""

Review Comment:
   This check seems redundant: `substring.isEmpty()` returns `true` only when 
`substring` is already the empty string, so both branches of the conditional 
yield the same value.
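
   If so, the conditional collapses to a single call (a sketch of the 
suggested simplification, not the final code):

       resultList.add(BinaryStringData.fromString(substring)); // same result, without the redundant isEmpty() check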






Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


mohitjain2504 commented on code in PR #24365:
URL: https://github.com/apache/flink/pull/24365#discussion_r1498635291


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/SplitFunction.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#SPLIT}. */
+@Internal
+public class SplitFunction extends BuiltInScalarFunction {
+public SplitFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.SPLIT, context);
+}
+
+public @Nullable ArrayData eval(@Nullable StringData string, @Nullable 
StringData delimiter) {
+try {
+if (string == null || delimiter == null) {
+return null;
+}
+String string1 = string.toString();
+String delimiter1 = delimiter.toString();
+List<BinaryStringData> resultList = new ArrayList<>();
+
+if (delimiter1.equals("")) {

Review Comment:
   delimiter1.isEmpty() can be a cleaner approach here






Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


mohitjain2504 commented on code in PR #24365:
URL: https://github.com/apache/flink/pull/24365#discussion_r1498635291


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/SplitFunction.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#SPLIT}. */
+@Internal
+public class SplitFunction extends BuiltInScalarFunction {
+public SplitFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.SPLIT, context);
+}
+
+public @Nullable ArrayData eval(@Nullable StringData string, @Nullable StringData delimiter) {
+try {
+if (string == null || delimiter == null) {
+return null;
+}
+String string1 = string.toString();
+String delimiter1 = delimiter.toString();
+List<StringData> resultList = new ArrayList<>();
+
+if (delimiter1.equals("")) {

Review Comment:
   delimiter1.isEmpty() can be a cleaner approach here
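   For reference, a minimal sketch of what the suggested change could look like in the eval body quoted above. Only the isEmpty() check is what this comment proposes; the loop body is an illustrative assumption, not the committed code:
   ```java
   if (delimiter1.isEmpty()) { // cleaner than delimiter1.equals("")
       // split into individual characters, per the documented SPLIT semantics
       for (int i = 0; i < string1.length(); i++) {
           resultList.add(BinaryStringData.fromString(String.valueOf(string1.charAt(i))));
       }
       return new GenericArrayData(resultList.toArray());
   }
   ```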



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34494) Migrate ReplaceIntersectWithSemiJoinRule

2024-02-21 Thread Jacky Lau (Jira)
Jacky Lau created FLINK-34494:
-

 Summary: Migrate ReplaceIntersectWithSemiJoinRule
 Key: FLINK-34494
 URL: https://issues.apache.org/jira/browse/FLINK-34494
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.20.0
Reporter: Jacky Lau
 Fix For: 1.20.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34493) Migrate ReplaceMinusWithAntiJoinRule

2024-02-21 Thread Jacky Lau (Jira)
Jacky Lau created FLINK-34493:
-

 Summary: Migrate ReplaceMinusWithAntiJoinRule
 Key: FLINK-34493
 URL: https://issues.apache.org/jira/browse/FLINK-34493
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.20.0
Reporter: Jacky Lau
 Fix For: 1.20.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


flinkbot commented on PR #24365:
URL: https://github.com/apache/flink/pull/24365#issuecomment-1958613193

   
   ## CI report:
   
   * 67fb0726819a76eb0121a51bd3f2b2ded14c45bb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


hanyuzheng7 opened a new pull request, #24365:
URL: https://github.com/apache/flink/pull/24365

   ## What is the purpose of the change
   This is an implementation of SPLIT
   
   
   ## Brief change log
   Splits a string into an array of substrings based on a delimiter. If the 
delimiter is not found, then the original string is returned as the only 
element in the array. If the delimiter is empty, then all characters in the 
string are split. If either the string or the delimiter is NULL, then a NULL 
value is returned.
   If the delimiter is found at the beginning or end of the string, or there 
are contiguous delimiters, then an empty string is added to the array.
   
   - Syntax
   `SPLIT(string, delimiter)`
   
   - Arguments
   string: The string to be split
   delimiter: The delimiter around which the string is split
   
   - Returns
   If the delimiter is not found, then the original string is returned as the 
only element in the array. If the delimiter is empty, then all characters in 
the string are split. If either the string or the delimiter is NULL, then a 
NULL value is returned.
   
   - Examples
   ```
   SELECT SPLIT('abcdefg', 'c');
   Result: ['ab', 'defg']
   ```
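   A few more illustrative calls, with expected results derived from the 
semantics described above rather than captured from a real run:
   ```
   SELECT SPLIT('abc', 'x');
   Result: ['abc']
   
   SELECT SPLIT('abcdefg', '');
   Result: ['a', 'b', 'c', 'd', 'e', 'f', 'g']
   
   SELECT SPLIT(',a,b,', ',');
   Result: ['', 'a', 'b', '']
   
   SELECT SPLIT(CAST(NULL AS STRING), ',');
   Result: NULL
   ```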
   
   - See also
   1. ksqlDB Split function
   ksqlDB provides a scalar function named SPLIT which splits a string into an 
array of substrings based on a delimiter.
   Syntax: SPLIT(string, delimiter)
   For example: SPLIT('a,b,c', ',') will return ['a', 'b', 'c'].
   
https://docs.ksqldb.io/en/0.8.1-ksqldb/developer-guide/ksqldb-reference/scalar-functions/#split
   
   2. Apache Hive Split function
   Hive offers a function named split which splits a string around a specified 
delimiter and returns an array of strings.
   Syntax: array split(string str, string pat)
   For example: split('a,b,c', ',') will return ["a", "b", "c"].
   https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
   
   3. Spark SQL Split function
   Spark SQL also offers a function named split, similar to the one in Hive.
   Syntax: split(str, pattern[, limit])
   Here, limit is an optional parameter to specify the maximum length of the 
returned array.
   For example: split('oneAtwoBthreeC', '[ABC]', 2) will return ["one", 
"twoBthreeC"].
   https://spark.apache.org/docs/latest/api/sql/index.html#split
   
   4. Presto Split function
   Presto offers a function named split which splits a string around a regular 
expression and returns an array of strings.
   Syntax: array split(string str, string regex)
   For example: split('a.b.c', '\.') will return ["a", "b", "c"].
   https://prestodb.io/docs/current/functions/string.html
   
   ## Verifying this change
   This change added tests in CollectionFunctionsITCase.
   
   ## Does this pull request potentially affect one of the following parts:  
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32706][table] Add built-in SPLIT_STRING function [flink]

2024-02-21 Thread via GitHub


hanyuzheng7 closed pull request #23108: [FLINK-32706][table] Add built-in 
SPLIT_STRING function
URL: https://github.com/apache/flink/pull/23108


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33817][flink-protobuf] Set ReadDefaultValues=False by default in proto3 for performance improvement [flink]

2024-02-21 Thread via GitHub


libenchao commented on code in PR #24035:
URL: https://github.com/apache/flink/pull/24035#discussion_r1498571218


##
docs/content/docs/connectors/table/formats/protobuf.md:
##
@@ -149,9 +149,12 @@ Format Options
   false
   Boolean
   
-  This option only works if the generated class's version is proto2. 
If this value is set to true, the format will read empty values as the default 
values defined in the proto file.
+  If this value is set to true, the format will read empty values as 
the default values defined in the proto file.
   If the value is set to false, the format will generate null values 
if the data element does not exist in the binary protobuf message.
-  If the proto syntax is proto3, this value will forcibly be set to 
true, because proto3's standard is to use default values.
+  If the proto syntax is proto3, this value needs to be set to true by 
users when using proto version older than 3.15 because proto3 

Review Comment:
   @sharath1709 Thanks for the reminder. This is indeed a breaking change; we 
can add a release note to describe it.
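   For readers following along, a minimal sketch of pinning the old behavior 
explicitly in a table definition. The connector, topic, and message class are 
made up for illustration; the option keys are assumed to match `PbFormatOptions`:
   ```
   CREATE TABLE user_events (
     id BIGINT,
     name STRING
   ) WITH (
     'connector' = 'kafka',
     'topic' = 'user-events',
     'format' = 'protobuf',
     'protobuf.message-class-name' = 'com.example.UserEvents',
     -- set explicitly so the new default does not change existing behavior
     'protobuf.read-default-values' = 'true'
   );
   ```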



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-29122][core] Improve robustness of FileUtils.expandDirectory() [flink]

2024-02-21 Thread via GitHub


anupamaggarwal closed pull request #24307: [FLINK-29122][core] Improve 
robustness of FileUtils.expandDirectory()
URL: https://github.com/apache/flink/pull/24307


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34492][table] fix javadoc of comments. [flink]

2024-02-21 Thread via GitHub


flinkbot commented on PR #24364:
URL: https://github.com/apache/flink/pull/24364#issuecomment-1958589343

   
   ## CI report:
   
   * 591213a1a83e4fa146e8e449229962ebd4879e6b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34492) fix scala style comment link when migrate scala to java

2024-02-21 Thread Jacky Lau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Lau updated FLINK-34492:
--
Description: 
 

scala [[org.apache.calcite.rel.rules.CalcMergeRule]]

java  {@link org.apache.calcite.rel.rules.CalcMergeRule}

> fix scala style comment link when migrate scala to java
> ---
>
> Key: FLINK-34492
> URL: https://issues.apache.org/jira/browse/FLINK-34492
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
>  
> scala [[org.apache.calcite.rel.rules.CalcMergeRule]]
> java  {@link org.apache.calcite.rel.rules.CalcMergeRule}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34492) fix scala style comment link when migrate scala to java

2024-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34492:
---
Labels: pull-request-available  (was: )

> fix scala style comment link when migrate scala to java
> ---
>
> Key: FLINK-34492
> URL: https://issues.apache.org/jira/browse/FLINK-34492
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34492][table] fix javadoc of comments. [flink]

2024-02-21 Thread via GitHub


liuyongvs opened a new pull request, #24364:
URL: https://github.com/apache/flink/pull/24364

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34492) fix scala style comment link when migrate scala to java

2024-02-21 Thread Jacky Lau (Jira)
Jacky Lau created FLINK-34492:
-

 Summary: fix scala style comment link when migrate scala to java
 Key: FLINK-34492
 URL: https://issues.apache.org/jira/browse/FLINK-34492
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.20.0
Reporter: Jacky Lau
 Fix For: 1.20.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-29802][state] Changelog supports native savepoint [flink]

2024-02-21 Thread via GitHub


masteryhx commented on PR #22744:
URL: https://github.com/apache/flink/pull/22744#issuecomment-1958565787

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34458][checkpointing] Rename options for Generalized incremental checkpoints (changelog) [flink]

2024-02-21 Thread via GitHub


Zakelly commented on PR #24324:
URL: https://github.com/apache/flink/pull/24324#issuecomment-1958564217

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34457) Rename options for latency tracking

2024-02-21 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-34457.
--
Resolution: Fixed

merged 6be297d1 into master.

> Rename options for latency tracking
> ---
>
> Key: FLINK-34457
> URL: https://issues.apache.org/jira/browse/FLINK-34457
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34457][state] Rename options for latency tracking [flink]

2024-02-21 Thread via GitHub


masteryhx closed pull request #24323: [FLINK-34457][state] Rename options for 
latency tracking
URL: https://github.com/apache/flink/pull/24323


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34383) Modify the comment with incorrect syntax

2024-02-21 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-34383.
---
Resolution: Won't Fix

> Modify the comment with incorrect syntax
> 
>
> Key: FLINK-34383
> URL: https://issues.apache.org/jira/browse/FLINK-34383
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: li you
>Priority: Major
>  Labels: pull-request-available
>
> There is an error in the syntax of the comment for the class 
> PermanentBlobCache



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-21 Thread via GitHub


zhuzhurk closed pull request #24273: [FLINK-34383][Documentation] Modify the 
comment with incorrect syntax
URL: https://github.com/apache/flink/pull/24273


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25537] [JUnit5 Migration] Module: flink-core package api-common [flink]

2024-02-21 Thread via GitHub


1996fanrui commented on code in PR #23960:
URL: https://github.com/apache/flink/pull/23960#discussion_r1498532976


##
flink-core/src/test/java/org/apache/flink/api/common/state/StateDescriptorTest.java:
##
@@ -247,22 +239,21 @@ public void go() {
 for (CheckedThread t : threads) {
 t.sync();
 }
-assertEquals(
-"Should use only one serializer but actually: " + serializers,
-1,
-serializers.size());
+assertThat(serializers.size())

Review Comment:
   hasSize directly?
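   i.e., something like the following (illustrative only, keeping the original 
failure message):
   ```java
   assertThat(serializers)
           .as("Should use only one serializer but actually: " + serializers)
           .hasSize(1);
   ```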



##
flink-core/src/test/java/org/apache/flink/api/common/typeutils/CompositeTypeSerializerUtilTest.java:
##
@@ -23,22 +23,23 @@
 import 
org.apache.flink.testutils.migration.SchemaCompatibilityTestingSerializer;
 import 
org.apache.flink.testutils.migration.SchemaCompatibilityTestingSerializer.SchemaCompatibilityTestingSnapshot;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
 
 import java.util.Arrays;
 
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 import static org.junit.Assert.assertArrayEquals;

Review Comment:
   Could `import static org.junit.Assert.assertArrayEquals;` be replaced with 
`org.assertj.core.api.Assertions.assertThat`?
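   For example (the array names here are placeholders, not the ones in this 
test):
   ```java
   // AssertJ equivalent of org.junit.Assert.assertArrayEquals(expected, actual)
   assertThat(actualSnapshots).containsExactly(expectedSnapshots);
   ```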



##
flink-core/src/test/java/org/apache/flink/api/common/io/RichInputFormatTest.java:
##
@@ -21,37 +21,35 @@
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.TaskInfo;
 import org.apache.flink.api.common.TaskInfoImpl;
-import org.apache.flink.api.common.accumulators.Accumulator;
 import org.apache.flink.api.common.functions.util.RuntimeUDFContext;
-import org.apache.flink.core.fs.Path;
 import org.apache.flink.metrics.groups.UnregisteredMetricsGroup;
 import org.apache.flink.types.Value;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
 
 import java.util.HashMap;
-import java.util.concurrent.Future;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests runtime context access from inside an RichInputFormat class. */
-public class RichInputFormatTest {
+class RichInputFormatTest {
 
 @Test
-public void testCheckRuntimeContextAccess() {
-final SerializedInputFormat<Value> inputFormat = new SerializedInputFormat<Value>();
+void testCheckRuntimeContextAccess() {
+final SerializedInputFormat<Value> inputFormat = new SerializedInputFormat<>();
 final TaskInfo taskInfo = new TaskInfoImpl("test name", 3, 1, 3, 0);
 inputFormat.setRuntimeContext(
 new RuntimeUDFContext(
 taskInfo,
 getClass().getClassLoader(),
 new ExecutionConfig(),
-new HashMap<String, Future<Path>>(),
-new HashMap<String, Accumulator<?, ?>>(),
+new HashMap<>(),
+new HashMap<>(),
 UnregisteredMetricsGroup.createOperatorMetricGroup()));
 
-assertEquals(inputFormat.getRuntimeContext().getTaskInfo().getIndexOfThisSubtask(), 1);
-assertEquals(inputFormat.getRuntimeContext().getTaskInfo().getNumberOfParallelSubtasks(), 3);
+assertThat(inputFormat.getRuntimeContext().getTaskInfo().getIndexOfThisSubtask())
+.isEqualTo(1);

Review Comment:
   ```suggestion
   .isOne();
   ```



##
flink-core/src/test/java/org/apache/flink/api/common/io/RichOutputFormatTest.java:
##
@@ -21,38 +21,36 @@
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.TaskInfo;
 import org.apache.flink.api.common.TaskInfoImpl;
-import org.apache.flink.api.common.accumulators.Accumulator;
 import org.apache.flink.api.common.functions.util.RuntimeUDFContext;
-import org.apache.flink.core.fs.Path;
 import org.apache.flink.metrics.groups.UnregisteredMetricsGroup;
 import org.apache.flink.types.Value;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
 
 import java.util.HashMap;
-import java.util.concurrent.Future;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests runtime context access from inside an RichOutputFormat class. */
-public class RichOutputFormatTest {
+class RichOutputFormatTest {
 
 @Test
-public void testCheckRuntimeContextAccess() {
-final SerializedOutputFormat<Value> inputFormat = new SerializedOutputFormat<Value>();
+void testCheckRuntimeContextAccess() {
+final SerializedOutputFormat<Value> inputFormat = new SerializedOutputFormat<>();
 final TaskInfo taskInfo = new TaskInfoImpl("test name", 3, 1, 3, 0);
 
 inputFormat.setRuntimeContext(
 new RuntimeUDFContext(
 taskInfo,
 getClass().getClassLoader(),
 new 

Re: [PR] [FLINK-34481][table] Migrate SetOpRewriteUtil to java [flink]

2024-02-21 Thread via GitHub


liuyongvs commented on code in PR #24358:
URL: https://github.com/apache/flink/pull/24358#discussion_r1498518405


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/utils/SetOpRewriteUtil.java:
##
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.utils;
+
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.planner.calcite.FlinkRelBuilder;
+import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction;
+
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.util.Util;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/** Util class that rewrite [[SetOp]]. */

Review Comment:
   done @JingGe, thanks very much for your review
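   For anyone skimming: the change replaces the Scaladoc-style link with a 
Javadoc one, e.g. (sketch, assuming Calcite's `SetOp`):
   ```java
   /** Utility class that rewrites {@link org.apache.calcite.rel.core.SetOp}. */
   ```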



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34471) Tune network memory as part of Autoscaler Memory Tuning

2024-02-21 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819443#comment-17819443
 ] 

Rui Fan commented on FLINK-34471:
-

Assuming ALL_TO_ALL in the first version is fine for me.

Also, if all of you don't mind, I can support POINT_TO_POINT in the next 
version.

> Tune network memory as part of Autoscaler Memory Tuning
> ---
>
> Key: FLINK-34471
> URL: https://issues.apache.org/jira/browse/FLINK-34471
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Maximilian Michels
>Priority: Major
>
> Design doc: 
> https://docs.google.com/document/d/19HYamwMaYYYOeH3NRbk6l9P-bBLBfgzMYjfGEPWEbeo/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-29802][state] Changelog supports native savepoint [flink]

2024-02-21 Thread via GitHub


masteryhx commented on PR #22744:
URL: https://github.com/apache/flink/pull/22744#issuecomment-1958500605

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34491) Move from experimental support to production support for Java 17

2024-02-21 Thread Dhruv Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhruv Patel updated FLINK-34491:

Description: This task is to move away from experimental support for Java 
17 to production support so that teams running Flink in production can migrate 
to Java 17 successfully  (was: This task is to move away from experimental 
support for Java 17 to production support so that teams running Flink in 
production can migrate to Java 17 successfully

 

https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/java_compatibility/#untested-flink-features-1)

> Move from experimental support to production support for Java 17
> 
>
> Key: FLINK-34491
> URL: https://issues.apache.org/jira/browse/FLINK-34491
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.1
>Reporter: Dhruv Patel
>Priority: Major
>
> This task is to move away from experimental support for Java 17 to production 
> support so that teams running Flink in production can migrate to Java 17 
> successfully



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34491) Move from experimental support to production support for Java 17

2024-02-21 Thread Dhruv Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhruv Patel updated FLINK-34491:

Description: 
This task is to move away from experimental support for Java 17 to production 
support so that teams running Flink in production can migrate to Java 17 
successfully

 

https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/java_compatibility/#untested-flink-features-1

  was:This task is to move away from experimental support for Java 17 to 
production support so that teams running Flink in production can migrate to 
Java 17 successfully


> Move from experimental support to production support for Java 17
> 
>
> Key: FLINK-34491
> URL: https://issues.apache.org/jira/browse/FLINK-34491
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.1
>Reporter: Dhruv Patel
>Priority: Major
>
> This task is to move away from experimental support for Java 17 to production 
> support so that teams running Flink in production can migrate to Java 17 
> successfully
>  
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/java_compatibility/#untested-flink-features-1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34491) Move from experimental support to production support for Java 17

2024-02-21 Thread Dhruv Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhruv Patel updated FLINK-34491:

Description: This task is to move away from experimental support for Java 
17 to production support so that teams running Flink in production can migrate 
to Java 17 successfully  (was: Tracking task to move away from experimental 
support for Java 17 to production support so that teams running Flink in 
production can migrate to Java 17 successfully)

> Move from experimental support to production support for Java 17
> 
>
> Key: FLINK-34491
> URL: https://issues.apache.org/jira/browse/FLINK-34491
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.1
>Reporter: Dhruv Patel
>Priority: Major
>
> This task is to move away from experimental support for Java 17 to production 
> support so that teams running Flink in production can migrate to Java 17 
> successfully



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34265) Add the doc of named parameters

2024-02-21 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34265:
---

Assignee: Feng Jin

> Add the doc of named parameters
> ---
>
> Key: FLINK-34265
> URL: https://issues.apache.org/jira/browse/FLINK-34265
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Planner
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34491) Move from experimental support to production support for Java 17

2024-02-21 Thread Dhruv Patel (Jira)
Dhruv Patel created FLINK-34491:
---

 Summary: Move from experimental support to production support for 
Java 17
 Key: FLINK-34491
 URL: https://issues.apache.org/jira/browse/FLINK-34491
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.18.1
Reporter: Dhruv Patel


Tracking task to move away from experimental support for Java 17 to production 
support so that teams running Flink in production can migrate to Java 17 
successfully



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33728) Do not rewatch when KubernetesResourceManagerDriver watch fail

2024-02-21 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819433#comment-17819433
 ] 

Xintong Song edited comment on FLINK-33728 at 2/22/24 1:21 AM:
---

master (1.20): e7e31a99d6f93d4dadda21fbd1ebee079fe2418e


was (Author: xintongsong):
master (1.19): e7e31a99d6f93d4dadda21fbd1ebee079fe2418e

> Do not rewatch when KubernetesResourceManagerDriver watch fail
> --
>
> Key: FLINK-33728
> URL: https://issues.apache.org/jira/browse/FLINK-33728
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: xiaogang zhou
>Assignee: xiaogang zhou
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> I met a massive production problem when Kubernetes ETCD responded slowly. 
> After Kube recovered after 1 hour, thousands of Flink jobs using 
> KubernetesResourceManagerDriver rewatched when receiving 
> ResourceVersionTooOld, which caused great pressure on the API server and made 
> the API server fail again... 
>  
> I am not sure whether it is necessary to call
> getResourceEventHandler().onError(throwable)
> in the PodCallbackHandlerImpl#handleError method.
>  
> We can just ignore the disconnection of the watching process and try to rewatch 
> once a new requestResource is called. And we can leverage the akka heartbeat 
> timeout to discover TM failures, just like the YARN mode does.
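
A conceptual sketch of the proposed behavior, with all names and structure 
invented for illustration (this is not Flink's actual driver code):
{code:java}
// On watch failure: do not propagate onError or recreate the watch immediately.
void handleWatchError(Throwable t) {
    watchStale = true; // just remember that the watch is gone
}

// On the next resource request: rebuild the watch lazily.
void requestResource(PodSpec spec) {
    if (watchStale) {
        recreateWatch(); // rewatch only when new pods are actually needed
        watchStale = false;
    }
    createTaskManagerPod(spec);
    // TM liveness is still covered by the akka heartbeat timeout,
    // as in YARN mode, so a lost watch does not hide failures.
}
{code}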



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33728) Do not rewatch when KubernetesResourceManagerDriver watch fail

2024-02-21 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-33728.

Fix Version/s: 1.19.0
   Resolution: Done

master (1.19): e7e31a99d6f93d4dadda21fbd1ebee079fe2418e

> Do not rewatch when KubernetesResourceManagerDriver watch fail
> --
>
> Key: FLINK-33728
> URL: https://issues.apache.org/jira/browse/FLINK-33728
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: xiaogang zhou
>Assignee: xiaogang zhou
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> I met a massive production problem when Kubernetes ETCD responded slowly. 
> After Kube recovered after 1 hour, thousands of Flink jobs using 
> KubernetesResourceManagerDriver rewatched when receiving 
> ResourceVersionTooOld, which caused great pressure on the API server and made 
> the API server fail again... 
>  
> I am not sure whether it is necessary to call
> getResourceEventHandler().onError(throwable)
> in the PodCallbackHandlerImpl#handleError method.
>  
> We can just ignore the disconnection of the watching process and try to rewatch 
> once a new requestResource is called. And we can leverage the akka heartbeat 
> timeout to discover TM failures, just like the YARN mode does.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33728) Do not rewatch when KubernetesResourceManagerDriver watch fail

2024-02-21 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-33728:
-
Fix Version/s: 1.20.0
   (was: 1.19.0)

> Do not rewatch when KubernetesResourceManagerDriver watch fail
> --
>
> Key: FLINK-33728
> URL: https://issues.apache.org/jira/browse/FLINK-33728
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: xiaogang zhou
>Assignee: xiaogang zhou
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> I met a massive production problem when Kubernetes ETCD responded slowly. 
> After Kube recovered after 1 hour, thousands of Flink jobs using 
> KubernetesResourceManagerDriver rewatched when receiving 
> ResourceVersionTooOld, which caused great pressure on the API server and made 
> the API server fail again... 
>  
> I am not sure whether it is necessary to call
> getResourceEventHandler().onError(throwable)
> in the PodCallbackHandlerImpl#handleError method.
>  
> We can just ignore the disconnection of the watching process and try to rewatch 
> once a new requestResource is called. And we can leverage the akka heartbeat 
> timeout to discover TM failures, just like the YARN mode does.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33728] Do not rewatch when KubernetesResourceManagerDriver wat… [flink]

2024-02-21 Thread via GitHub


xintongsong closed pull request #24163: [FLINK-33728] Do not rewatch when 
KubernetesResourceManagerDriver wat…
URL: https://github.com/apache/flink/pull/24163


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33817][flink-protobuf] Set ReadDefaultValues=False by default in proto3 for performance improvement [flink]

2024-02-21 Thread via GitHub


sharath1709 commented on code in PR #24035:
URL: https://github.com/apache/flink/pull/24035#discussion_r1498451041


##
docs/content/docs/connectors/table/formats/protobuf.md:
##
@@ -149,9 +149,12 @@ Format Options
   false
   Boolean
   
-  This option only works if the generated class's version is proto2. 
If this value is set to true, the format will read empty values as the default 
values defined in the proto file.
+  If this value is set to true, the format will read empty values as 
the default values defined in the proto file.
   If the value is set to false, the format will generate null values 
if the data element does not exist in the binary protobuf message.
-  If the proto syntax is proto3, this value will forcibly be set to 
true, because proto3's standard is to use default values.
+  If the proto syntax is proto3, this value needs to be set to true by 
users when using proto version older than 3.15 because proto3 

Review Comment:
   @libenchao In case you haven't noticed, I'd also like to point out that this 
is a breaking change, as older SQL queries without proper null checking can 
fail after it. That said, this is a step in the right direction, since it makes 
the better-performing behavior the default.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33817][flink-protobuf] Set ReadDefaultValues=False by default in proto3 for performance improvement [flink]

2024-02-21 Thread via GitHub


sharath1709 commented on code in PR #24035:
URL: https://github.com/apache/flink/pull/24035#discussion_r1498449123


##
docs/content/docs/connectors/table/formats/protobuf.md:
##
@@ -149,9 +149,12 @@ Format Options
   false
   Boolean
   
-  This option only works if the generated class's version is proto2. 
If this value is set to true, the format will read empty values as the default 
values defined in the proto file.
+  If this value is set to true, the format will read empty values as 
the default values defined in the proto file.
   If the value is set to false, the format will generate null values 
if the data element does not exist in the binary protobuf message.
-  If the proto syntax is proto3, this value will forcibly be set to 
true, because proto3's standard is to use default values.
+  If the proto syntax is proto3, this value needs to be set to true by 
users when using proto version older than 3.15 because proto3 

Review Comment:
   Ack, updated the content to align with `PbFormatOptions`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Bump org.postgresql:postgresql from 42.5.4 to 42.6.1 in /flink-autoscaler-plugin-jdbc [flink-kubernetes-operator]

2024-02-21 Thread via GitHub


dependabot[bot] commented on PR #779:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/779#issuecomment-1958364283

   Superseded by #780.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Bump org.postgresql:postgresql from 42.5.4 to 42.6.1 in /flink-autoscaler-plugin-jdbc [flink-kubernetes-operator]

2024-02-21 Thread via GitHub


dependabot[bot] closed pull request #779: Bump org.postgresql:postgresql from 
42.5.4 to 42.6.1 in /flink-autoscaler-plugin-jdbc
URL: https://github.com/apache/flink-kubernetes-operator/pull/779


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.postgresql:postgresql from 42.5.4 to 42.5.5 in /flink-autoscaler-plugin-jdbc [flink-kubernetes-operator]

2024-02-21 Thread via GitHub


dependabot[bot] opened a new pull request, #780:
URL: https://github.com/apache/flink-kubernetes-operator/pull/780

   Bumps [org.postgresql:postgresql](https://github.com/pgjdbc/pgjdbc) from 
42.5.4 to 42.5.5.
   
   Changelog
   Sourced from [org.postgresql:postgresql's 
changelog](https://github.com/pgjdbc/pgjdbc/blob/master/CHANGELOG.md).
   Notable changes since version 42.0.0; read the complete [History of 
Changes](https://jdbc.postgresql.org/documentation/changelog.html).
   The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
   
   [Unreleased]
   Changed
   Added
   Fixed
   
   [42.7.2] (2024-02-21 08:23:00 -0500)
   Security
   
   security: SQL Injection via line comment generation. It is possible in 
SimpleQuery mode to generate a line comment by having a placeholder for a 
numeric with a `-`, such as `-?`. There must be a second placeholder for a 
string immediately after. Setting the parameter to a negative value creates a 
line comment. This has been fixed in this version; fixes 
[CVE-2024-1597](https://www.cve.org/CVERecord?id=CVE-2024-1597). Reported by 
[Paul Gerste](https://github.com/paul-gerste-sonarsource). See the [security 
advisory](https://github.com/pgjdbc/pgjdbc/security/advisories/GHSA-24rp-q3w6-vc56) 
for more details. This has been fixed in versions 42.7.2, 42.6.1, 42.5.5, 
42.4.4, 42.3.9, 42.2.28.jre7. See the security advisory for workarounds.
   
   Changed
   
   fix: Use simple query for isValid. Using Extended query sends two messages; 
checkConnectionQuery was never set or used, removed [PR 
#3101](https://redirect.github.com/pgjdbc/pgjdbc/pull/3101)
   perf: Avoid autoboxing bind indexes by @bokken in [PR 
#1244](https://redirect.github.com/pgjdbc/pgjdbc/pull/1244)
   refactor: Document that encodePassword will zero out the password array, 
and remove driver's default encodePassword by @vlsi in [PR 
#3084](https://redirect.github.com/pgjdbc/pgjdbc/pull/3084)
   
   Added
   
   feat: Add PasswordUtil for encrypting passwords client side [PR 
#3082](https://redirect.github.com/pgjdbc/pgjdbc/pull/3082)
   
   [42.7.1] (2023-12-06 08:34:00 -0500)
   Changed
   
   perf: improve performance of PreparedStatement.setBlob, BlobInputStream, 
and BlobOutputStream with dynamic buffer sizing [PR 
#3044](https://redirect.github.com/pgjdbc/pgjdbc/pull/3044)
   
   Fixed
   
   fix: Apply connectTimeout before SSLSocket.startHandshake to avoid 
infinite wait in case the connection is broken [PR 
#3040](https://redirect.github.com/pgjdbc/pgjdbc/pull/3040)
   fix: support waffle-jna 2.x and 3.x by using reflective approach for 
ManagedSecBufferDesc [PR 
#2720](https://redirect.github.com/pgjdbc/pgjdbc/pull/2720); fixes [Issue 
#2690](https://redirect.github.com/pgjdbc/pgjdbc/issues/2690)
   fix: NoSuchMethodError on ByteBuffer#position when running on Java 8 when 
accessing arrays; fixes [Issue 
#3014](https://redirect.github.com/pgjdbc/pgjdbc/issues/3014)
   Revert [PR #2925](https://redirect.github.com/pgjdbc/pgjdbc/pull/2925): Use 
canonical DateStyle name [PR 
#3035](https://redirect.github.com/pgjdbc/pgjdbc/pull/3035); fixes [Issue 
#3008](https://redirect.github.com/pgjdbc/pgjdbc/issues/3008)
   Revert [PR #2973](https://redirect.github.com/pgjdbc/pgjdbc/pull/2973): 
feat: support SET statements combining with other queries with semicolon in 
PreparedStatement [PR 
#3010](https://redirect.github.com/pgjdbc/pgjdbc/pull/3010); fixes [Issue 
#3007](https://redirect.github.com/pgjdbc/pgjdbc/issues/3007)
   fix: avoid timezone conversions when sending LocalDateTime to the database 
[PR #2852](https://redirect.github.com/pgjdbc/pgjdbc/pull/2852); fixes [Issue 
#1390](https://redirect.github.com/pgjdbc/pgjdbc/issues/1390), [Issue 
[jira] [Comment Edited] (FLINK-34451) [Kubernetes Operator] Job with restarting TaskManagers uses wrong/misleading fallback approach

2024-02-21 Thread Alex Hoffer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819410#comment-17819410
 ] 

Alex Hoffer edited comment on FLINK-34451 at 2/21/24 11:11 PM:
---

[~gyfora] 
 # Here is my FlinkDeployment:

{code:java}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: kafka
  namespace: flink
spec:
  image: [redacted]
  flinkVersion: v1_18
  restartNonce: 25
  flinkConfiguration:
taskmanager.rpc.port: "50100"
taskmanager.numberOfTaskSlots: "1"
blob.server.port: "6124"
jobmanager.memory.process.size: "null"
taskmanager.memory.process.size: "2gb"
high-availability.type: kubernetes
high-availability.storageDir: 
abfss://job-result-store@[redacted].dfs.core.windows.net/kafka

state.checkpoints.dir: 
abfss://checkpoints@[redacted].dfs.core.windows.net/kafka
execution.checkpointing.interval: "3"
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoint-storage: filesystem

state.savepoints.dir: 
abfss://savepoints@[redacted].dfs.core.windows.net/kafka

state.backend.type: rocksdb
state.backend.incremental: "true"
state.backend.rocksdb.localdir: /rocksdb

fs.azure.account.auth.type: OAuth
fs.azure.account.oauth.provider.type: 
org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.endpoint: [redacted]
fs.azure.account.oauth2.client.id: [redacted]

# Fix bug with hadoop azure that buffers checkpoint blocks in disk rather 
than memory https://issues.apache.org/jira/browse/HADOOP-18707
fs.azure.data.blocks.buffer: array

restart-strategy.type: exponentialdelay
job.autoscaler.enabled: "true"
job.autoscaler.stabilization.interval: 2m
job.autoscaler.metrics.window: 1m
job.autoscaler.target.utilization: "0.6"
job.autoscaler.target.utilization.boundary: "0.2"
job.autoscaler.restart.time: 1m
job.autoscaler.catch-up.duration: 1m
job.autoscaler.scale-up.grace-period: 10m
jobmanager.scheduler: adaptive
pipeline.max-parallelism: "12"
job.autoscaler.vertex.max-parallelism: "5"

  serviceAccount: flink
  jobManager:
replicas: 2
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - jobmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 1Gi
  requests:
ephemeral-storage: 1Gi
  taskManager:
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - taskmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 2Gi
  requests:
ephemeral-storage: 2Gi
volumeMounts:
  - mountPath: /rocksdb
name: rocksdb
volumes:
  - name: rocksdb
emptyDir:
  sizeLimit: 1Gi
  podTemplate:
spec:
  containers:
- name: flink-main-container
  ports:
- containerPort: 9250
  name: metrics
  protocol: TCP
  job:
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/bin/python", "-py", 
"/opt/flink/usrlib/kafka-k6.py", "--kubernetes", "--fivemin_bytessent_stream", 
"--kafka_bootstrap_ip", "10.177.1.26"]
upgradeMode: savepoint
parallelism: 1{code}
      2. Yes, I can recreate this scenario each time I try.

      3. My ticket was mistaken, this was found on operator version 1.7.0 (I 
will update the ticket). I just recreated it on the latest Flink Operator image 
available (37ca517).

      4. Just confirmed it occurs on Flink 1.17.0

      5. Did not occur when adaptive scheduler was turned off!

In scenario 5 above, the 

[jira] [Comment Edited] (FLINK-34451) [Kubernetes Operator] Job with restarting TaskManagers uses wrong/misleading fallback approach

2024-02-21 Thread Alex Hoffer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819410#comment-17819410
 ] 

Alex Hoffer edited comment on FLINK-34451 at 2/21/24 11:10 PM:
---

 
 # Here is my FlinkDeployment:

{code:java}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: kafka
  namespace: flink
spec:
  image: [redacted]
  flinkVersion: v1_18
  restartNonce: 25
  flinkConfiguration:
taskmanager.rpc.port: "50100"
taskmanager.numberOfTaskSlots: "1"
blob.server.port: "6124"
jobmanager.memory.process.size: "null"
taskmanager.memory.process.size: "2gb"
high-availability.type: kubernetes
high-availability.storageDir: 
abfss://job-result-store@[redacted].dfs.core.windows.net/kafka

state.checkpoints.dir: 
abfss://checkpoints@[redacted].dfs.core.windows.net/kafka
execution.checkpointing.interval: "3"
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoint-storage: filesystem

state.savepoints.dir: 
abfss://savepoints@[redacted].dfs.core.windows.net/kafka

state.backend.type: rocksdb
state.backend.incremental: "true"
state.backend.rocksdb.localdir: /rocksdb

fs.azure.account.auth.type: OAuth
fs.azure.account.oauth.provider.type: 
org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.endpoint: [redacted]
fs.azure.account.oauth2.client.id: [redacted]

# Fix bug with hadoop azure that buffers checkpoint blocks in disk rather 
than memory https://issues.apache.org/jira/browse/HADOOP-18707
fs.azure.data.blocks.buffer: array

restart-strategy.type: exponentialdelay
job.autoscaler.enabled: "true"
job.autoscaler.stabilization.interval: 2m
job.autoscaler.metrics.window: 1m
job.autoscaler.target.utilization: "0.6"
job.autoscaler.target.utilization.boundary: "0.2"
job.autoscaler.restart.time: 1m
job.autoscaler.catch-up.duration: 1m
job.autoscaler.scale-up.grace-period: 10m
jobmanager.scheduler: adaptive
pipeline.max-parallelism: "12"
job.autoscaler.vertex.max-parallelism: "5"

  serviceAccount: flink
  jobManager:
replicas: 2
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - jobmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 1Gi
  requests:
ephemeral-storage: 1Gi
  taskManager:
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - taskmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 2Gi
  requests:
ephemeral-storage: 2Gi
volumeMounts:
  - mountPath: /rocksdb
name: rocksdb
volumes:
  - name: rocksdb
emptyDir:
  sizeLimit: 1Gi
  podTemplate:
spec:
  containers:
- name: flink-main-container
  ports:
- containerPort: 9250
  name: metrics
  protocol: TCP
  job:
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/bin/python", "-py", 
"/opt/flink/usrlib/kafka-k6.py", "--kubernetes", "--fivemin_bytessent_stream", 
"--kafka_bootstrap_ip", "10.177.1.26"]
upgradeMode: savepoint
parallelism: 1{code}
      2. Yes, I can recreate this scenario each time I try.

      3. My ticket was mistaken, this was found on operator version 1.7.0 (I 
will update the ticket). I just recreated it on the latest Flink Operator image 
available (37ca517).

      4. Just confirmed it occurs on Flink 1.17.0

      5. Did not occur when adaptive scheduler was turned off!

In scenario 5 above, the job 

[jira] [Comment Edited] (FLINK-34451) [Kubernetes Operator] Job with restarting TaskManagers uses wrong/misleading fallback approach

2024-02-21 Thread Alex Hoffer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819410#comment-17819410
 ] 

Alex Hoffer edited comment on FLINK-34451 at 2/21/24 11:10 PM:
---

[~gyfora] 
 # Here is my FlinkDeployment:

{code:java}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: kafka
  namespace: flink
spec:
  image: [redacted]
  flinkVersion: v1_18
  restartNonce: 25
  flinkConfiguration:
taskmanager.rpc.port: "50100"
taskmanager.numberOfTaskSlots: "1"
blob.server.port: "6124"
jobmanager.memory.process.size: "null"
taskmanager.memory.process.size: "2gb"
high-availability.type: kubernetes
high-availability.storageDir: 
abfss://job-result-store@[redacted].dfs.core.windows.net/kafka

state.checkpoints.dir: 
abfss://checkpoints@[redacted].dfs.core.windows.net/kafka
execution.checkpointing.interval: "3"
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoint-storage: filesystem

state.savepoints.dir: 
abfss://savepoints@[redacted].dfs.core.windows.net/kafka

state.backend.type: rocksdb
state.backend.incremental: "true"
state.backend.rocksdb.localdir: /rocksdb

fs.azure.account.auth.type: OAuth
fs.azure.account.oauth.provider.type: 
org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.endpoint: [redacted]
fs.azure.account.oauth2.client.id: [redacted]

# Fix bug with hadoop-azure that buffers checkpoint blocks on disk rather 
than in memory https://issues.apache.org/jira/browse/HADOOP-18707
fs.azure.data.blocks.buffer: array

restart-strategy.type: exponentialdelay
job.autoscaler.enabled: "true"
job.autoscaler.stabilization.interval: 2m
job.autoscaler.metrics.window: 1m
job.autoscaler.target.utilization: "0.6"
job.autoscaler.target.utilization.boundary: "0.2"
job.autoscaler.restart.time: 1m
job.autoscaler.catch-up.duration: 1m
job.autoscaler.scale-up.grace-period: 10m
jobmanager.scheduler: adaptive
pipeline.max-parallelism: "12"
job.autoscaler.vertex.max-parallelism: "5"

  serviceAccount: flink
  jobManager:
replicas: 2
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - jobmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 1Gi
  requests:
ephemeral-storage: 1Gi
  taskManager:
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - taskmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 2Gi
  requests:
ephemeral-storage: 2Gi
volumeMounts:
  - mountPath: /rocksdb
name: rocksdb
volumes:
  - name: rocksdb
emptyDir:
  sizeLimit: 1Gi
  podTemplate:
spec:
  containers:
- name: flink-main-container
  ports:
- containerPort: 9250
  name: metrics
  protocol: TCP
  job:
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/bin/python", "-py", 
"/opt/flink/usrlib/kafka-k6.py", "--kubernetes", "--fivemin_bytessent_stream", 
"--kafka_bootstrap_ip", "10.177.1.26"]
upgradeMode: savepoint
parallelism: 1{code}
      2. Yes, I can recreate this scenario each time I try.

      3. My ticket was mistaken; this was found on operator version 1.7.0 (I 
will update the ticket). I just recreated it on the latest Flink Operator image 
available (37ca517).

      4. Just confirmed it occurs on Flink 1.17.0

      5. Did not occur when adaptive scheduler was turned off!

In scenario 5 above, the 

[jira] [Commented] (FLINK-34451) [Kubernetes Operator] Job with restarting TaskManagers uses wrong/misleading fallback approach

2024-02-21 Thread Alex Hoffer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819410#comment-17819410
 ] 

Alex Hoffer commented on FLINK-34451:
-

 
 # Here is my FlinkDeployment:


{code:java}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: kafka
  namespace: flink
spec:
  image: [redacted]
  flinkVersion: v1_18
  restartNonce: 25
  flinkConfiguration:
taskmanager.rpc.port: "50100"
taskmanager.numberOfTaskSlots: "1"
blob.server.port: "6124"
jobmanager.memory.process.size: "null"
taskmanager.memory.process.size: "2gb"
high-availability.type: kubernetes
high-availability.storageDir: 
abfss://job-result-store@[redacted].dfs.core.windows.net/kafka

state.checkpoints.dir: 
abfss://checkpoints@[redacted].dfs.core.windows.net/kafka
execution.checkpointing.interval: "3"
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoint-storage: filesystem

state.savepoints.dir: 
abfss://savepoints@[redacted].dfs.core.windows.net/kafka

state.backend.type: rocksdb
state.backend.incremental: "true"
state.backend.rocksdb.localdir: /rocksdb

fs.azure.account.auth.type: OAuth
fs.azure.account.oauth.provider.type: 
org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.endpoint: [redacted]
fs.azure.account.oauth2.client.id: [redacted]

# Fix bug with hadoop-azure that buffers checkpoint blocks on disk rather 
than in memory https://issues.apache.org/jira/browse/HADOOP-18707
fs.azure.data.blocks.buffer: array

restart-strategy.type: exponentialdelay
job.autoscaler.enabled: "true"
job.autoscaler.stabilization.interval: 2m
job.autoscaler.metrics.window: 1m
job.autoscaler.target.utilization: "0.6"
job.autoscaler.target.utilization.boundary: "0.2"
job.autoscaler.restart.time: 1m
job.autoscaler.catch-up.duration: 1m
job.autoscaler.scale-up.grace-period: 10m
jobmanager.scheduler: adaptive
pipeline.max-parallelism: "12"
job.autoscaler.vertex.max-parallelism: "5"

  serviceAccount: flink
  jobManager:
replicas: 2
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - jobmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 1Gi
  requests:
ephemeral-storage: 1Gi
  taskManager:
resource:
  memory: "2gb"
  cpu: 1
podTemplate:
  spec:
affinity:
  podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
  matchExpressions:
  - key: app
operator: In
values:
  - kafka
  - key: component
operator: In
values:
  - taskmanager
topologyKey: failure-domain.beta.kubernetes.io/zone
  weight: 10
containers:
  - name: flink-main-container
resources:
  limits:
ephemeral-storage: 2Gi
  requests:
ephemeral-storage: 2Gi
volumeMounts:
  - mountPath: /rocksdb
name: rocksdb
volumes:
  - name: rocksdb
emptyDir:
  sizeLimit: 1Gi
  podTemplate:
spec:
  containers:
- name: flink-main-container
  ports:
- containerPort: 9250
  name: metrics
  protocol: TCP
  job:
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/bin/python", "-py", 
"/opt/flink/usrlib/kafka-k6.py", "--kubernetes", "--fivemin_bytessent_stream", 
"--kafka_bootstrap_ip", "10.177.1.26"]
upgradeMode: savepoint
parallelism: 1{code}

 # Yes, I can recreate this scenario each time I try.
 # My ticket was mistaken; this was found on operator version 1.7.0 (I will 
update the ticket). I just recreated it on the latest Flink Operator image 
available (37ca517).
 # Just confirmed it occurs on Flink 1.17.0
 # Did not occur when adaptive scheduler was turned off!


In scenario 5 above, the job correctly flipped back to the last checkpoint. 
*This suggests it may be related 
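
For anyone reproducing scenario 5: turning the adaptive scheduler off amounts 
to dropping the {{jobmanager.scheduler: adaptive}} entry from 
{{flinkConfiguration}} above, so the default scheduler is used. A minimal 
programmatic sketch of the same switch, assuming Flink's standard 
{{JobManagerOptions}}:
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// Scenario 5: use the default scheduler instead of the adaptive one.
conf.set(JobManagerOptions.SCHEDULER, JobManagerOptions.SchedulerType.Default);
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
{code}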

[jira] [Updated] (FLINK-34490) flink-connector-kinesis not correctly supporting credential chaining

2024-02-21 Thread Eddie Ramirez (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eddie Ramirez updated FLINK-34490:
--
Description: 
When using AWS credential chaining, {{flink-connector-kinesis}} does not 
correctly follow the chain of credentials.

*Expected Result*

{{flink-connector-kinesis}} should follow the {{source_profile}} for each 
respective profile in {{~/.aws/config}} to ultimately determine credentials.

*Observed Result*

{{flink-connector-kinesis}} only follows the first matching 
{{source_profile}} specified in {{~/.aws/config}} and then errors out 
because there are no credentials for that profile.
{code:java}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
load credentials into profile [profile intermediate-role]: AWS Access Key ID is 
not specified
{code}
 

*Configuration*

connector config
{code:java}
aws.credentials.provider: PROFILE
aws.credentials.profile.name: flink-access-role{code}
 

aws {{~/.aws/config}} file
{code:java}
[profile flink-access-role]
role_arn = arn:aws:iam::x:role/flink-access-role
source_profile = intermediate-role

[profile intermediate-role]
role_arn = arn:aws:iam::x:role/intermediate-role
source_profile = aws-sso-role

[profile aws-sso-role]
sso_session = idc
sso_role_name = x
sso_account_id = x
credential_process = aws configure export-credentials --profile=aws-sso-role

[sso-session idc]
sso_start_url = x
sso_region = x
sso_registration_scopes = sso:account:access
{code}
 

  was:
When using AWS credential chaining, {{flink-connector-kinesis}} does not 
correctly follow the chain of credentials.

*Expected Result*

{{flink-connector-kinesis}} should follow the {{source_profile}} for each 
respective profile in {{~/.aws/config}} to ultimately determine credentials.

*Observed Result*

{{flink-connector-kinesis}} only follows the first matching 
{{source_profile}} specified in {{~/.aws/config}} and then errors out 
because there are no credentials for that profile.
{code:java}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
load credentials into profile [profile intermediate-role]: AWS Access Key ID is 
not specified
{code}
 

*Configuration*

connector config
{code:java}
aws.credentials.provider: PROFILE
aws.credentials.profile.name: flink-access-role{code}
 

aws {{~/.aws/config}} file
{code:java}
[profile flink-access-role]
role_arn = arn:aws:iam::x:role/flink-access-role
source_profile = intermediate-role
[profile intermediate-role]
role_arn = arn:aws:iam::x:role/intermediate-role
source_profile = aws-sso-role
[profile aws-sso-role]
sso_session = idc
sso_role_name = x
sso_account_id = x
credential_process = aws configure export-credentials --profile=aws-sso-role
[sso-session idc]
sso_start_url = x
sso_region = x
sso_registration_scopes = sso:account:access
{code}
 


> flink-connector-kinesis not correctly supporting credential chaining
> 
>
> Key: FLINK-34490
> URL: https://issues.apache.org/jira/browse/FLINK-34490
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: aws-connector-4.2.0, 1.17.2
>Reporter: Eddie Ramirez
>Priority: Major
> Attachments: Flink Credential Chaining.png
>
>
> When using AWS credential chaining, {{flink-connector-kinesis}} does not 
> correctly follow the chain of credentials.
>  
> *Expected Result*
> {{flink-connector-kinesis}} should follow the {{source_profile}} for each 
> respective profile in {{~/.aws/config}} to ultimately determine credentials.
>  
> *Observed Result*
> {{flink-connector-kinesis}} only follows the first matching 
> {{source_profile}} specified in {{~/.aws/config}} and then errors out 
> because there are no credentials for that profile.
> {code:java}
> org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
> load credentials into profile [profile intermediate-role]: AWS Access Key ID 
> is not specified
> {code}
>  
> *Configuration*
> connector config
> {code:java}
> aws.credentials.provider: PROFILE
> aws.credentials.profile.name: flink-access-role{code}
>  
> aws {{~/.aws/config}} file
> {code:java}
> [profile flink-access-role]
> role_arn = arn:aws:iam::x:role/flink-access-role
> source_profile = intermediate-role
> [profile intermediate-role]
> role_arn = arn:aws:iam::x:role/intermediate-role
> source_profile = aws-sso-role
> [profile aws-sso-role]
> sso_session = idc
> sso_role_name = x
> sso_account_id = x
> credential_process = aws configure export-credentials --profile=aws-sso-role

Re: [PR] [FLINK-33517] Implement restore tests for Values node [flink]

2024-02-21 Thread via GitHub


bvarghese1 commented on PR #24114:
URL: https://github.com/apache/flink/pull/24114#issuecomment-1958172227

   > lgtm % CI failure
   
   Fixed. CI has passed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34490) flink-connector-kinesis not correctly supporting credential chaining

2024-02-21 Thread Eddie Ramirez (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eddie Ramirez updated FLINK-34490:
--
Description: 
When using AWS credential chaining, {{flink-connector-kinesis}} does not 
correctly follow the chain of credentials.

*Expected Result*

{{flink-connector-kinesis}} should follow the {{source_profile}} for each 
respective profile in {{~/.aws/config}} to ultimately determine credentials.

*Observed Result*

{{flink-connector-kinesis}} only follows the first matching 
{{source_profile}} specified in {{~/.aws/config}} and then errors out 
because there are no credentials for that profile.
{code:java}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
load credentials into profile [profile intermediate-role]: AWS Access Key ID is 
not specified
{code}
 

*Configuration*

connector config
{code:java}
aws.credentials.provider: PROFILE
aws.credentials.profile.name: flink-access-role{code}
 

aws {{~/.aws/config}} file
{code:java}
[profile flink-access-role]
role_arn = arn:aws:iam::x:role/flink-access-role
source_profile = intermediate-role
[profile intermediate-role]
role_arn = arn:aws:iam::x:role/intermediate-role
source_profile = aws-sso-role
[profile aws-sso-role]
sso_session = idc
sso_role_name = x
sso_account_id = x
credential_process = aws configure export-credentials --profile=aws-sso-role
[sso-session idc]
sso_start_url = x
sso_region = x
sso_registration_scopes = sso:account:access
{code}
 

  was:
When using AWS credential chaining, {{flink-connector-kinesis}} does not 
correctly follow the chain of credentials.

*Expected Result*

{{flink-connector-kinesis}} should follow the {{source_profile}} for each 
respective profile in {{~/.aws/config}} to ultimately determine credentials.

*Observed Result*

{{flink-connector-kinesis}} only follows the first matching 
{{source_profile}} specified in {{~/.aws/config}} and then errors out 
because there are no credentials for that profile.


{code:java}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
load credentials into profile [profile intermediate-role]: AWS Access Key ID is 
not specified
{code}

*Configuration*

connector config

 
{code:java}
aws.credentials.provider: PROFILE
aws.credentials.profile.name: flink-access-role{code}

aws {{~/.aws/config}} file

 
{code:java}
[profile flink-access-role]
role_arn = arn:aws:iam::x:role/flink-access-role
source_profile = intermediate-role
[profile intermediate-role]
role_arn = arn:aws:iam::x:role/intermediate-role
source_profile = aws-sso-role
[profile aws-sso-role]
sso_session = idc
sso_role_name = x
sso_account_id = x
credential_process = aws configure export-credentials --profile=aws-sso-role
[sso-session idc]
sso_start_url = x
sso_region = x
sso_registration_scopes = sso:account:access
{code}
 



> flink-connector-kinesis not correctly supporting credential chaining
> 
>
> Key: FLINK-34490
> URL: https://issues.apache.org/jira/browse/FLINK-34490
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: aws-connector-4.2.0, 1.17.2
>Reporter: Eddie Ramirez
>Priority: Major
> Attachments: Flink Credential Chaining.png
>
>
> When using AWS credential chaining, {{flink-connector-kinesis}} does not 
> correctly follow the chain of credentials.
>  
> *Expected Result*
> {{flink-connector-kinesis}} should follow the {{source_profile}} for each 
> respective profile in {{~/.aws/config}} to ultimately determine credentials.
>  
> *Observed Result*
> {{flink-connector-kinesis}} only follows the first matching 
> {{source_profile}} specified in {{~/.aws/config}} and then errors out 
> because there are no credentials for that profile.
> {code:java}
> org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
> load credentials into profile [profile intermediate-role]: AWS Access Key ID 
> is not specified
> {code}
>  
> *Configuration*
> connector config
> {code:java}
> aws.credentials.provider: PROFILE
> aws.credentials.profile.name: flink-access-role{code}
>  
> aws {{~/.aws/config}} file
> {code:java}
> [profile flink-access-role]
> role_arn = arn:aws:iam::x:role/flink-access-role
> source_profile = intermediate-role
> [profile intermediate-role]
> role_arn = arn:aws:iam::x:role/intermediate-role
> source_profile = aws-sso-role
> [profile aws-sso-role]
> sso_session = idc
> sso_role_name = x
> sso_account_id = x
> credential_process = aws configure export-credentials 

[jira] [Created] (FLINK-34490) flink-connector-kinesis not correctly supporting credential chaining

2024-02-21 Thread Eddie Ramirez (Jira)
Eddie Ramirez created FLINK-34490:
-

 Summary: flink-connector-kinesis not correctly supporting 
credential chaining
 Key: FLINK-34490
 URL: https://issues.apache.org/jira/browse/FLINK-34490
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.17.2, aws-connector-4.2.0
Reporter: Eddie Ramirez
 Attachments: Flink Credential Chaining.png

When using AWS credential chaining, {{flink-connector-kinesis}} does not 
correctly follow the chain of credentials.

*Expected Result*

{{flink-connector-kinesis}} should follow the {{source_profile}} for each 
respective profile in {{~/.aws/config}} to ultimately determine credentials.

*Observed Result*

{{flink-connector-kinesis}} only follows the first matching 
{{source_profile}} specified in {{~/.aws/config}} and then errors out 
because there are no credentials for that profile.


{code:java}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
load credentials into profile [profile intermediate-role]: AWS Access Key ID is 
not specified
{code}

*Configuration*

connector config

 
{code:java}
aws.credentials.provider: PROFILE
aws.credentials.profile.name: flink-access-role{code}

aws {{~/.aws/config}} file

 
{code:java}
[profile flink-access-role]
role_arn = arn:aws:iam::x:role/flink-access-role
source_profile = intermediate-role
[profile intermediate-role]
role_arn = arn:aws:iam::x:role/intermediate-role
source_profile = aws-sso-role
[profile aws-sso-role]
sso_session = idc
sso_role_name = x
sso_account_id = x
credential_process = aws configure export-credentials --profile=aws-sso-role
[sso-session idc]
sso_start_url = x
sso_region = x
sso_registration_scopes = sso:account:access
{code}
 




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34469][table] Implement TableDistribution toString [flink]

2024-02-21 Thread via GitHub


JingGe commented on code in PR #24338:
URL: https://github.com/apache/flink/pull/24338#discussion_r1498379848


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/TableDistribution.java:
##
@@ -142,4 +142,9 @@ && getBucketCount().get() != 0) {
 sb.append("\n");
 return sb.toString();
 }
+
+@Override
+public String toString() {
+return asSerializableString();

Review Comment:
   1. If this is the requirement of the `toString()` method, we should consider 
marking `asSerializableString()` as private and replacing any external call 
of `asSerializableString()` with `toString()`.
   2. Could you also add a test for it?
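   
   A minimal sketch of such a test (the `ofHash` factory and the assertion 
style are assumptions, not taken from this PR):
   ```java
   import static org.assertj.core.api.Assertions.assertThat;

   import java.util.Collections;
   import org.junit.jupiter.api.Test;

   class TableDistributionToStringTest {

       @Test
       void testToStringMatchesSerializableString() {
           TableDistribution distribution =
                   TableDistribution.ofHash(Collections.singletonList("f0"), 4);
           assertThat(distribution.toString())
                   .isEqualTo(distribution.asSerializableString());
       }
   }
   ```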



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34469][table] Implement TableDistribution toString [flink]

2024-02-21 Thread via GitHub


JingGe commented on code in PR #24338:
URL: https://github.com/apache/flink/pull/24338#discussion_r1498379848


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/TableDistribution.java:
##
@@ -142,4 +142,9 @@ && getBucketCount().get() != 0) {
 sb.append("\n");
 return sb.toString();
 }
+
+@Override
+public String toString() {
+return asSerializableString();

Review Comment:
   1. If this is the requirement of the `toString()` method, we should consider 
marking `asSerializableString()` as private and replacing any external call of 
`asSerializableString()` with `toString()`.
   2. Could you also add a test for it?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34481][table] Migrate SetOpRewriteUtil to java [flink]

2024-02-21 Thread via GitHub


JingGe commented on code in PR #24358:
URL: https://github.com/apache/flink/pull/24358#discussion_r1498366852


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/utils/SetOpRewriteUtil.java:
##
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.utils;
+
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.planner.calcite.FlinkRelBuilder;
+import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction;
+
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.util.Util;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/** Util class that rewrite [[SetOp]]. */

Review Comment:
   ```suggestion
   /** Util class that rewrites {@link org.apache.calcite.rel.core.SetOp}. */
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34477][table] Support capture groups in REGEXP_REPLACE [flink]

2024-02-21 Thread via GitHub


flinkbot commented on PR #24363:
URL: https://github.com/apache/flink/pull/24363#issuecomment-1957839900

   
   ## CI report:
   
   * ee3ee8ec8373aa820310d5bcf88ecd9ac5c3bfc7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34477) support capture groups in REGEXP_REPLACE

2024-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34477:
---
Labels: pull-request-available  (was: )

> support capture groups in REGEXP_REPLACE
> 
>
> Key: FLINK-34477
> URL: https://issues.apache.org/jira/browse/FLINK-34477
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.1
>Reporter: David Anderson
>Priority: Major
>  Labels: pull-request-available
>
> For example, I would expect this query
> {code:java}
> select REGEXP_REPLACE('ERR1,ERR2', '([^,]+)', 'AA$1AA'); {code}
> to produce
> {code:java}
> AAERR1AA,AAERR2AA{code}
> but instead it produces
> {code:java}
> AA$1AA,AA$1AA{code}
> With FLINK-9990 support was added for REGEXP_EXTRACT, which does provide 
> access to the capture groups, but for many use cases supporting this in the 
> way that users expect, in REGEXP_REPLACE, would be more natural and 
> convenient.
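
For comparison, plain {{java.util.regex}} replacement already provides the 
{{$1}} capture-group semantics requested above (a minimal sketch):
{code:java}
import java.util.regex.Pattern;

public class CaptureGroupDemo {
    public static void main(String[] args) {
        // Java regex replacement treats $1 as a back-reference to group 1,
        // which is the result FLINK-34477 expects from REGEXP_REPLACE.
        String result = Pattern.compile("([^,]+)")
                .matcher("ERR1,ERR2")
                .replaceAll("AA$1AA");
        System.out.println(result); // prints AAERR1AA,AAERR2AA
    }
}
{code}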



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34477][table] Support capture groups in REGEXP_REPLACE [flink]

2024-02-21 Thread via GitHub


jeyhunkarimov opened a new pull request, #24363:
URL: https://github.com/apache/flink/pull/24363

   ## What is the purpose of the change
   
   Support capture groups in REGEXP_REPLACE
   
   ## Brief change log
   
 - Added support for capture groups by skipping the dollar sign
 - Adjusted some tests and added new ones
   
   
   ## Verifying this change
   
   Tests: 
`org.apache.flink.table.planner.expressions.ScalarFunctionsTest::testRegexpReplace`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-02-21 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819362#comment-17819362
 ] 

Sergey Nuyanzin commented on FLINK-34148:
-

It seems I didn't get your initial question, sorry.

Yes, that's a good question. 
To be honest, I don't know the exact answer.
However, here is what we have so far: 
1. I checked that it's reproducible only for JDK 8 among the LTS releases 
(didn't check 9, 10); e.g. for 11 and 17 it is not reproducible, so it looks 
like it was fixed at some point on the JDK level.
2. I was not able to find anything noticeable in the profiler.
3. Bisecting shows that this perf regression comes with the update of the 
shading plugin (and the changed behavior mentioned above).
I tend to think that it could be related to something like classloading or SPI, 
since without the fix the number of loaded classes is bigger; however, as I 
mentioned, this is only a hypothesis.
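
One cheap way to probe the classloading hypothesis would be to log the 
loaded-class count on both commits and compare (a sketch using the standard 
JMX bean; where exactly to hook it into the benchmark is left open):
{code:java}
import java.lang.management.ManagementFactory;

public class LoadedClassProbe {
    public static void main(String[] args) {
        // Compare this number between 881062f352 and 5d9d8748b6 on JDK 8;
        // a materially different count would support the classloading/SPI idea.
        long loaded = ManagementFactory.getClassLoadingMXBean()
                .getTotalLoadedClassCount();
        System.out.println("Classes loaded so far: " + loaded);
    }
}
{code}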
 

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Assignee: Sergey Nuyanzin
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
>  . It only involves strings not so long (128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-21 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819354#comment-17819354
 ] 

Ferenc Csaky commented on FLINK-34399:
--

Executed the following test scenarios:
 # Simple datagen table:
{code:sql}
CREATE TABLE IF NOT EXISTS `datagen_table` (
  `col_str` STRING,
  `col_int` INT,
  `col_ts` TIMESTAMP(3),
  WATERMARK FOR `col_ts` AS col_ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'datagen'
);
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `col_str`, `col_int`, `col_ts` FROM 
`default_catalog`.`default_database`.`datagen_table`
{code}

 # Aggregate view:
{code:sql}
CREATE TABLE IF NOT EXISTS `txn_gen` (
  `id` INT,
  `amount` INT,
  `timestamp` TIMESTAMP(3),
  WATERMARK FOR `timestamp` AS `timestamp` - INTERVAL '1' SECOND
) WITH (
  'connector' = 'datagen',
  'fields.id.max' = '5',
  'fields.id.min' = '1',
  'rows-per-second' = '1'
);

CREATE VIEW IF NOT EXISTS `aggr_five_sec` AS SELECT
  `id`,
  COUNT(`id`) AS `txn_count`,
  TUMBLE_ROWTIME(`timestamp`, INTERVAL '5' SECOND) AS `w_row_time`
FROM `txn_gen`
GROUP BY `id`, TUMBLE(`timestamp`, INTERVAL '5' SECOND)
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `id`, `txn_count`, `w_row_time` FROM 
`default_catalog`.`default_database`.`aggr_five_sec`
{code}

 # Join view:
{code:sql}
CREATE TEMPORARY TABLE IF NOT EXISTS `location_updates` (
  `character_id` INT,
  `location` STRING,
  `proctime` AS PROCTIME()
)
WITH (
  'connector' = 'faker', 
  'fields.character_id.expression' = '#{number.numberBetween ''0'',''100''}',
  'fields.location.expression' = '#{harry_potter.location}'
);

CREATE TEMPORARY TABLE IF NOT EXISTS `characters` (
  `character_id` INT,
  `name` STRING
)
WITH (
  'connector' = 'faker', 
  'fields.character_id.expression' = '#{number.numberBetween ''0'',''100''}',
  'fields.name.expression' = '#{harry_potter.characters}'
);

CREATE TEMPORARY VIEW IF NOT EXISTS `joined` AS SELECT 
  c.character_id,
  l.location,
  c.name
FROM location_updates AS l
JOIN characters FOR SYSTEM_TIME AS OF proctime AS c
ON l.character_id = c.character_id;
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `character_id`, `location`, `name` FROM 
`default_catalog`.`default_database`.`joined`
{code}

Job execution went fine, and all tests gave the expected results. I also checked 
the related PRs for this feature; it is very well covered with unit tests, so I 
think it looks good and works as desired.
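
For reference, the whole round trip from scenario 1 fits in a few lines of 
Table API code (a self-contained sketch of the verification pattern from the 
ticket, assuming nothing beyond the datagen connector):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class SerializableStringRoundTrip {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql(
                "CREATE TABLE datagen_table ("
                        + "  col_str STRING,"
                        + "  col_int INT,"
                        + "  col_ts TIMESTAMP(3),"
                        + "  WATERMARK FOR col_ts AS col_ts - INTERVAL '5' SECOND"
                        + ") WITH ('connector' = 'datagen')");
        Table table = tEnv.from("datagen_table");
        String sql = table.getQueryOperation().asSerializableString();
        System.out.println(sql);
        // The serialized query is runnable as-is; this streams until cancelled.
        tEnv.sqlQuery(sql).execute().print();
    }
}
{code}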

> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: release-testing
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31664][table]Add ARRAY_INTERSECT function [flink]

2024-02-21 Thread via GitHub


hanyuzheng7 commented on PR #23171:
URL: https://github.com/apache/flink/pull/23171#issuecomment-1957585378

   @dawidwys @liuyongvs Ok


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31663][table] Add ARRAY_EXCEPT function [flink]

2024-02-21 Thread via GitHub


bvarghese1 closed pull request #22588: [FLINK-31663][table] Add ARRAY_EXCEPT 
function
URL: https://github.com/apache/flink/pull/22588


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34489) New File Sink end-to-end test timed out

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819313#comment-17819313
 ] 

Matthias Pohl edited comment on FLINK-34489 at 2/21/24 4:22 PM:


Same issue in {{Streaming File Sink end-to-end test}}:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57702=logs=dc1bf4ed-4646-531a-f094-e103042be549=fb3d654d-52f8-5b98-fe9d-b18dd2e2b790=3313


was (Author: mapohl):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57702=logs=dc1bf4ed-4646-531a-f094-e103042be549=fb3d654d-52f8-5b98-fe9d-b18dd2e2b790=3313

> New File Sink end-to-end test timed out
> ---
>
> Key: FLINK-34489
> URL: https://issues.apache.org/jira/browse/FLINK-34489
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57707=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3726
> {code}
> Feb 21 07:26:03 Number of produced values 10770/6
> Feb 21 07:39:50 Test (pid: 151375) did not finish after 900 seconds.
> Feb 21 07:39:50 Printing Flink logs and killing it:
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22040) Maven: Entry has not been leased from this pool / fix for e2e tests

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819315#comment-17819315
 ] 

Matthias Pohl commented on FLINK-22040:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57725=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=2377

> Maven: Entry has not been leased from this pool / fix for e2e tests
> ---
>
> Key: FLINK-22040
> URL: https://issues.apache.org/jira/browse/FLINK-22040
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.12.2, 1.13.0, 1.15.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: auto-unassigned
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819316#comment-17819316
 ] 

Matthias Pohl commented on FLINK-34202:
---

The following CI failures didn't include the fix, yet, and are just added for 
documentation purposes:
* 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57700=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=27458

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.17.3, 1.18.2
>
> Attachments: Screenshot 2024-02-21 at 09.45.18.png
>
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819314#comment-17819314
 ] 

Matthias Pohl commented on FLINK-29114:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57722=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=11860

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
> Attachments: FLINK-29114.log
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
>     at 
> 

[jira] [Commented] (FLINK-34489) New File Sink end-to-end test timed out

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819313#comment-17819313
 ] 

Matthias Pohl commented on FLINK-34489:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57702=logs=dc1bf4ed-4646-531a-f094-e103042be549=fb3d654d-52f8-5b98-fe9d-b18dd2e2b790=3313

> New File Sink end-to-end test timed out
> ---
>
> Key: FLINK-34489
> URL: https://issues.apache.org/jira/browse/FLINK-34489
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57707=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3726
> {code}
> Feb 21 07:26:03 Number of produced values 10770/6
> Feb 21 07:39:50 Test (pid: 151375) did not finish after 900 seconds.
> Feb 21 07:39:50 Printing Flink logs and killing it:
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26644) python StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies failed on azure

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819312#comment-17819312
 ] 

Matthias Pohl commented on FLINK-26644:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57701=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=26409

> python 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies 
> failed on azure
> ---
>
> Key: FLINK-26644
> URL: https://issues.apache.org/jira/browse/FLINK-26644
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.4, 1.15.0, 1.16.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-03-14T18:50:24.6842853Z Mar 14 18:50:24 
> === FAILURES 
> ===
> 2022-03-14T18:50:24.6844089Z Mar 14 18:50:24 _ 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies _
> 2022-03-14T18:50:24.6844846Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6846063Z Mar 14 18:50:24 self = 
>   testMethod=test_generate_stream_graph_with_dependencies>
> 2022-03-14T18:50:24.6847104Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6847766Z Mar 14 18:50:24 def 
> test_generate_stream_graph_with_dependencies(self):
> 2022-03-14T18:50:24.6848677Z Mar 14 18:50:24 python_file_dir = 
> os.path.join(self.tempdir, "python_file_dir_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6849833Z Mar 14 18:50:24 os.mkdir(python_file_dir)
> 2022-03-14T18:50:24.6850729Z Mar 14 18:50:24 python_file_path = 
> os.path.join(python_file_dir, "test_stream_dependency_manage_lib.py")
> 2022-03-14T18:50:24.6852679Z Mar 14 18:50:24 with 
> open(python_file_path, 'w') as f:
> 2022-03-14T18:50:24.6853646Z Mar 14 18:50:24 f.write("def 
> add_two(a):\nreturn a + 2")
> 2022-03-14T18:50:24.6854394Z Mar 14 18:50:24 env = self.env
> 2022-03-14T18:50:24.6855019Z Mar 14 18:50:24 
> env.add_python_file(python_file_path)
> 2022-03-14T18:50:24.6855519Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6856254Z Mar 14 18:50:24 def plus_two_map(value):
> 2022-03-14T18:50:24.6857045Z Mar 14 18:50:24 from 
> test_stream_dependency_manage_lib import add_two
> 2022-03-14T18:50:24.6857865Z Mar 14 18:50:24 return value[0], 
> add_two(value[1])
> 2022-03-14T18:50:24.6858466Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6858924Z Mar 14 18:50:24 def add_from_file(i):
> 2022-03-14T18:50:24.6859806Z Mar 14 18:50:24 with 
> open("data/data.txt", 'r') as f:
> 2022-03-14T18:50:24.6860266Z Mar 14 18:50:24 return i[0], 
> i[1] + int(f.read())
> 2022-03-14T18:50:24.6860879Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6862022Z Mar 14 18:50:24 from_collection_source = 
> env.from_collection([('a', 0), ('b', 0), ('c', 1), ('d', 1),
> 2022-03-14T18:50:24.6863259Z Mar 14 18:50:24  
>  ('e', 2)],
> 2022-03-14T18:50:24.6864057Z Mar 14 18:50:24  
> type_info=Types.ROW([Types.STRING(),
> 2022-03-14T18:50:24.6864651Z Mar 14 18:50:24  
>  Types.INT()]))
> 2022-03-14T18:50:24.6865150Z Mar 14 18:50:24 
> from_collection_source.name("From Collection")
> 2022-03-14T18:50:24.6866212Z Mar 14 18:50:24 keyed_stream = 
> from_collection_source.key_by(lambda x: x[1], key_type=Types.INT())
> 2022-03-14T18:50:24.6867083Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6867793Z Mar 14 18:50:24 plus_two_map_stream = 
> keyed_stream.map(plus_two_map).name("Plus Two Map").set_parallelism(3)
> 2022-03-14T18:50:24.6868620Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6869412Z Mar 14 18:50:24 add_from_file_map = 
> plus_two_map_stream.map(add_from_file).name("Add From File Map")
> 2022-03-14T18:50:24.6870239Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6870883Z Mar 14 18:50:24 test_stream_sink = 
> add_from_file_map.add_sink(self.test_sink).name("Test Sink")
> 2022-03-14T18:50:24.6871803Z Mar 14 18:50:24 
> test_stream_sink.set_parallelism(4)
> 2022-03-14T18:50:24.6872291Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6872756Z Mar 14 18:50:24 archive_dir_path = 
> os.path.join(self.tempdir, "archive_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6873557Z Mar 14 18:50:24 
> os.mkdir(archive_dir_path)
> 2022-03-14T18:50:24.6874817Z Mar 14 18:50:24 with 
> open(os.path.join(archive_dir_path, "data.txt"), 'w') as f:
> 2022-03-14T18:50:24.6875414Z Mar 14 18:50:24 f.write("3")

[jira] [Commented] (FLINK-27916) HybridSourceReaderTest.testReader failed with AssertionError

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819311#comment-17819311
 ] 

Matthias Pohl commented on FLINK-27916:
---

1.17: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57700=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=21eae189-b04c-5c04-662b-17dc80ffc83a=7429

> HybridSourceReaderTest.testReader failed with AssertionError
> 
>
> Key: FLINK-27916
> URL: https://issues.apache.org/jira/browse/FLINK-27916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.16.0, 1.19.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Attachments: Screen Shot 2022-07-21 at 5.51.40 PM.png
>
>
> {code:java}
> 2022-06-05T07:47:33.3332158Z Jun 05 07:47:33 [ERROR] Tests run: 3, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 2.03 s <<< FAILURE! - in 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest
> 2022-06-05T07:47:33.3334366Z Jun 05 07:47:33 [ERROR] 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader
>   Time elapsed: 0.108 s  <<< FAILURE!
> 2022-06-05T07:47:33.3335385Z Jun 05 07:47:33 java.lang.AssertionError: 
> 2022-06-05T07:47:33.3336049Z Jun 05 07:47:33 
> 2022-06-05T07:47:33.3336682Z Jun 05 07:47:33 Expected size: 1 but was: 0 in:
> 2022-06-05T07:47:33.3337316Z Jun 05 07:47:33 []
> 2022-06-05T07:47:33.3338437Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.assertAndClearSourceReaderFinishedEvent(HybridSourceReaderTest.java:199)
> 2022-06-05T07:47:33.3340082Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader(HybridSourceReaderTest.java:96)
> 2022-06-05T07:47:33.3341373Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-05T07:47:33.3342540Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-05T07:47:33.3344124Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-05T07:47:33.3345283Z Jun 05 07:47:33  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-06-05T07:47:33.3346804Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-06-05T07:47:33.3348218Z Jun 05 07:47:33  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-06-05T07:47:33.3349495Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-06-05T07:47:33.3350779Z Jun 05 07:47:33  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-06-05T07:47:33.3351956Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3357032Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-06-05T07:47:33.3358633Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-06-05T07:47:33.3360003Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-06-05T07:47:33.3361924Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-06-05T07:47:33.3363427Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-06-05T07:47:33.3364793Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-06-05T07:47:33.3365619Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-06-05T07:47:33.3366254Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-06-05T07:47:33.3366939Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-06-05T07:47:33.3367556Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3368268Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-06-05T07:47:33.3369166Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-06-05T07:47:33.3369993Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-06-05T07:47:33.3371021Z Jun 05 07:47:33  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-06-05T07:47:33.3372128Z Jun 05 07:47:33  at 
> 

[jira] [Created] (FLINK-34489) New File Sink end-to-end test timed out

2024-02-21 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34489:
-

 Summary: New File Sink end-to-end test timed out
 Key: FLINK-34489
 URL: https://issues.apache.org/jira/browse/FLINK-34489
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.19.0, 1.20.0
Reporter: Matthias Pohl


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57707=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3726

{code}
Feb 21 07:26:03 Number of produced values 10770/6
Feb 21 07:39:50 Test (pid: 151375) did not finish after 900 seconds.
Feb 21 07:39:50 Printing Flink logs and killing it:
[...]
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34471) Tune network memory as part of Autoscaler Memory Tuning

2024-02-21 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819306#comment-17819306
 ] 

Maximilian Michels commented on FLINK-34471:


I think in addition to the fine-grained approach described in the doc, we can 
do a first implementation which simply uses 
[https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/memory/network_mem_tuning/#network-buffer-lifecycle]
 and assumes an ALL_TO_ALL relationship. This may not optimize down to the last 
byte but still gives great savings over the default.
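
For a rough sense of the numbers, the ALL_TO_ALL assumption reduces the 
documented buffer formula to simple arithmetic. A back-of-the-envelope sketch, 
assuming the documented defaults (32 KiB segments, 2 buffers per channel, 8 
floating buffers per gate) and an example parallelism of 12:
{code:java}
public class NetworkMemoryEstimate {
    public static void main(String[] args) {
        int segmentSize = 32 * 1024; // taskmanager.memory.segment-size default
        int buffersPerChannel = 2;   // taskmanager.network.memory.buffers-per-channel
        int floatingPerGate = 8;     // taskmanager.network.memory.floating-buffers-per-gate

        // ALL_TO_ALL: one channel per remote subtask.
        int parallelism = 12;
        int perGate = parallelism * buffersPerChannel + floatingPerGate;

        // One input gate plus one output gate per task in this sketch.
        long bytesPerTask = 2L * perGate * segmentSize;
        System.out.printf("~%d KiB network memory per task%n", bytesPerTask / 1024);
    }
}
{code}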

> Tune network memory as part of Autoscaler Memory Tuning
> ---
>
> Key: FLINK-34471
> URL: https://issues.apache.org/jira/browse/FLINK-34471
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Maximilian Michels
>Priority: Major
>
> Design doc: 
> https://docs.google.com/document/d/19HYamwMaYYYOeH3NRbk6l9P-bBLBfgzMYjfGEPWEbeo/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34471) Tune network memory as part of Autoscaler Memory Tuning

2024-02-21 Thread Maximilian Michels (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-34471:
---
Summary: Tune network memory as part of Autoscaler Memory Tuning  (was: 
Tune the network memory in Autoscaler)

> Tune network memory as part of Autoscaler Memory Tuning
> ---
>
> Key: FLINK-34471
> URL: https://issues.apache.org/jira/browse/FLINK-34471
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Maximilian Michels
>Priority: Major
>
> Design doc: 
> https://docs.google.com/document/d/19HYamwMaYYYOeH3NRbk6l9P-bBLBfgzMYjfGEPWEbeo/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34471) Tune the network memory in Autoscaler

2024-02-21 Thread Maximilian Michels (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-34471:
---
Summary: Tune the network memory in Autoscaler  (was: Tune the network 
memroy in Autoscaler)

> Tune the network memory in Autoscaler
> -
>
> Key: FLINK-34471
> URL: https://issues.apache.org/jira/browse/FLINK-34471
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Maximilian Michels
>Priority: Major
>
> Design doc: 
> https://docs.google.com/document/d/19HYamwMaYYYOeH3NRbk6l9P-bBLBfgzMYjfGEPWEbeo/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34152] Tune all directly observable memory types [flink-kubernetes-operator]

2024-02-21 Thread via GitHub


mxm commented on PR #778:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/778#issuecomment-1957044189

   @1996fanrui I've addressed your comments. Please have another look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34363] Set artifact version to $project_version if $FLINK_VERSION is not set [flink-connector-shared-utils]

2024-02-21 Thread via GitHub


zentol commented on code in PR #38:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/38#discussion_r1497786866


##
_utils.sh:
##
@@ -68,4 +68,15 @@ function set_pom_version {
   new_version=$1
 
  ${MVN} org.codehaus.mojo:versions-maven-plugin:2.8.1:set -DnewVersion=${new_version} -DgenerateBackupPoms=false --quiet
+}
+
+function is_flink_version_set_in_pom {
+  set +u
+  version=$(${MVN} help:evaluate -Dexpression="flink.version" -q -DforceStdout)
+  if [ -n "${version}" ]; then

Review Comment:
   This isn't quite reliable because it fails if the pom doesn't contain a 
reference to `flink.version` at all (e.g. the Flink repo).
   There it prints `null object or invalid expression`.
   It works for the parent pom because it does define an empty `flink.version` 
property, mostly for documentation purposes.
   Would be good to harden this so we don't fall into a trap in the future.
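
   A possible hardening, sketched purely from the behavior described above (the
   sentinel string is the one quoted in this review; the function shape and the
   `MVN` variable just follow the surrounding script, so treat this as an
   assumption rather than the final PR code):

{code}
function is_flink_version_set_in_pom {
  set +u
  version=$(${MVN} help:evaluate -Dexpression="flink.version" -q -DforceStdout 2>/dev/null)
  # mvn prints "null object or invalid expression" when the pom defines no
  # flink.version property at all; treat that output like an unset property.
  if [ -n "${version}" ] && [ "${version}" != "null object or invalid expression" ]; then
    return 0
  fi
  return 1
}
{code}

   Whether that sentinel string is stable across Maven versions is an open
   question, which is exactly the kind of trap the comment warns about.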



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-21 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-34202.
---
Fix Version/s: 1.19.0
   1.17.3
   1.18.2
   Resolution: Fixed

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.17.3, 1.18.2
>
> Attachments: Screenshot 2024-02-21 at 09.45.18.png
>
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819278#comment-17819278
 ] 

Matthias Pohl commented on FLINK-34202:
---

Thanks Lorenzo for looking into it. I guess we won't be able to verify anymore 
whether this resolved the issue because of the optimization [~hxb] already 
provided for the Python stage. I'm going to close this issue. Thanks everyone 
for participating.

master: 
[95163e89c84edbbad8477715739d3e5f2d80615e|https://github.com/apache/flink/commit/95163e89c84edbbad8477715739d3e5f2d80615e]
1.19: 
[a25fca9cc9342c59f8e2c5e5b5a17e5f875e9732|https://github.com/apache/flink/commit/a25fca9cc9342c59f8e2c5e5b5a17e5f875e9732]
1.18: 
[5c16321b8064859d68004462410c8fcfd3ca0b2f|https://github.com/apache/flink/commit/5c16321b8064859d68004462410c8fcfd3ca0b2f]
1.17: 
[dfc85f6544cf196c6e6e09c3f9e71c936b89d45c|https://github.com/apache/flink/commit/dfc85f6544cf196c6e6e09c3f9e71c936b89d45c]

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: Screenshot 2024-02-21 at 09.45.18.png
>
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34488) Integrate snapshot deployment into GHA nightly workflow

2024-02-21 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34488:
-

 Summary: Integrate snapshot deployment into GHA nightly workflow
 Key: FLINK-34488
 URL: https://issues.apache.org/jira/browse/FLINK-34488
 Project: Flink
  Issue Type: Sub-task
  Components: Build System / CI
Affects Versions: 1.18.1, 1.19.0, 1.20.0
Reporter: Matthias Pohl


Analogously to the [Azure Pipelines nightly 
config|https://github.com/apache/flink/blob/e923d4060b6dabe650a8950774d176d3e92437c2/tools/azure-pipelines/build-apache-repo.yml#L103],
 we want to deploy the snapshot artifacts in the GHA nightly workflow as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-21 Thread Lorenzo Affetti (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819273#comment-17819273
 ] 

Lorenzo Affetti edited comment on FLINK-34202 at 2/21/24 2:18 PM:
--

[~mapohl] thanks for pointing that out.

Initially I did not understand this was happening only on CI001, my bad.

I found out that the Nexus service was consuming an enormous amount of CPU on 
CI001.

I restarted and updated the service just now; the issue should disappear.

Please don't hesitate to contact me if something similar happens again(y)


was (Author: JIRAUSER304233):
[~mapohl] thanks for pointing that out.

Initially I did not understand this was happening only on CI001, my bad.

I found out that the Nexus service was consuming an enormous amount of CPU on 
CI001.

I restarted and updated the service right now, the issue should disappear.

Please don't hesitate to contact me again if something similar happens again(y)

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: Screenshot 2024-02-21 at 09.45.18.png
>
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-21 Thread Lorenzo Affetti (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819273#comment-17819273
 ] 

Lorenzo Affetti commented on FLINK-34202:
-

[~mapohl] thanks for pointing that out.

Initially I did not understand this was happening only on CI001, my bad.

I found out that the Nexus service was consuming an enormous amount of CPU on 
CI001.

I restarted and updated the service just now; the issue should disappear.

Please don't hesitate to contact me again if something similar happens again(y)

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: Screenshot 2024-02-21 at 09.45.18.png
>
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34487) Integrate tools/azure-pipelines/build-python-wheels.yml into GHA nightly workflow

2024-02-21 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34487:
-

 Summary: Integrate tools/azure-pipelines/build-python-wheels.yml 
into GHA nightly workflow
 Key: FLINK-34487
 URL: https://issues.apache.org/jira/browse/FLINK-34487
 Project: Flink
  Issue Type: Sub-task
  Components: Build System / CI
Affects Versions: 1.18.1, 1.19.0, 1.20.0
Reporter: Matthias Pohl


Analogously to the [Azure Pipelines nightly 
config|https://github.com/apache/flink/blob/e923d4060b6dabe650a8950774d176d3e92437c2/tools/azure-pipelines/build-apache-repo.yml#L183],
 we want to generate the wheel artifacts in the GHA nightly workflow as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-02-21 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819269#comment-17819269
 ] 

Robert Metzger commented on FLINK-34148:


Thanks, but how does the broken shading behavior cause a performance regression 
in the {{StringSerializationBenchmark}}?

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Assignee: Sergey Nuyanzin
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
>  . It only involves strings not so long (128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-02-21 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819267#comment-17819267
 ] 

Sergey Nuyanzin commented on FLINK-34148:
-

[~rmetzger] yes, there is FLINK-34234, under which there is a PR addressing this 
issue on the flink-shaded side

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Assignee: Sergey Nuyanzin
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
>  . It only involves strings not so long (128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-02-21 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819265#comment-17819265
 ] 

Robert Metzger commented on FLINK-34148:


Thanks for addressing this issue.
Maybe I'm overlooking something in the discussion or related tickets, but have 
we understood what caused the performance regression in flink-shaded 1.18? E.g. 
do we know what we need to fix in flink-shaded to move to flink-shaded 1.18.1 or 
1.19?

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Assignee: Sergey Nuyanzin
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
>  . It only involves strings not so long (128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34331][ci] Adds reusable workflow that is used to load the runner configuration based on the projects owner [flink]

2024-02-21 Thread via GitHub


flinkbot commented on PR #24362:
URL: https://github.com/apache/flink/pull/24362#issuecomment-1956693550

   
   ## CI report:
   
   * 75374e85405a03ac08ae1d9cdd227b52d3e0f9ec UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


